
Questioning The Role Of Gene Duplication-Based Evolution In Monarch Migration


Each year about 100 million Monarch butterflies from Canada and the northeastern United States make their journey to Mexico's Sierra Madre mountains in an astonishing two-month-long migration (Ref 1).  They fly 2,500 miles to a remote area that is only 60 square miles in size (Ref 1).  No one fully understands what triggers this mass movement of Lepidopterans.  But there is no getting away from the fact that this is a phenomenon that, as one review summed up, “staggers the mind”, especially when one considers that these butterflies are freshly hatched (Ref 1).  In short, Monarch migrants are always “on their maiden voyage” (Ref 2).  The location they fly to is home to a forest of broad-trunked trees that effectively retain warmth and keep out rain, factors that are essential for the Monarchs’ survival (Ref 1).
 
With a four-inch wingspan and a weight of less than a fifth of an ounce, it is remarkable that the Monarchs survive the odyssey (Ref 1).  Making frequent stops for nectar and water, they fly approximately 50 miles a day, avoiding all manner of predators.  Rapidly shifting winds over the Great Lakes and scorching desert temperatures in the southern states provide formidable obstacles (Ref 1).  Nevertheless, the Monarchs’ finely-tuned sense of direction gets most of them across.
 
It was not until 1975 that scientists first uncovered the full extent of the Monarch’s migration (Ref 1).  What has become clear since then is that only Monarchs travel such distances to avoid the “certain death of a cold winter”.   According to University of Toronto zoologist David Gibo, soaring is the key to making it to Mexico (Ref 1). Indeed, flapping wings is about the most energy-inefficient way of getting anywhere.  Other aspects of the Monarch’s migration-linked behaviors, such as the reproductive diapause that halts energy-draining reproductive activity during the journey, continue to fascinate scientists worldwide (Ref 2).  Both diapause and the six-month longevity characteristic of migrants are caused by decreased levels of Juvenile Hormone, which is itself regulated by four genes (Ref 2).
 
Exactly how Monarchs navigate so precisely to such a specific location is a subject of intense debate.  One theory suggests that they respond to the sun’s location, another that they are somehow sensitive to the earth’s magnetic field (Ref 1).  Recent molecular studies have shown that Monarchs have specialized cells in their brains that regulate their daily ‘clock’ and help keep them on course (Ref 3).  Biologist Chip Taylor from the University of Kansas has done some remarkable tagging experiments demonstrating that even if Monarchs are moved to different locations during the course of their journey south, they are still able to re-orient themselves and continue onwards to their final destination (Ref 1). 
 
A study headed by Steven Reppert at the University of Massachusetts has elucidated much of the biological basis of the timing component of Monarch migration (Ref 3).  Through a process known as time-compensated sun compass orientation, proteins with names such as Period, Timeless, Cryptochrome 1 and Cryptochrome 2 provide Monarchs with a well-regulated light responsiveness during both day and night (Ref 3).  While Cryptochrome 1 is a photoreceptor that responds specifically to blue light, Cryptochrome 2 is a repressor of transcription, efficiently regulating the period and timeless genes over the course of a 24-hour light cycle (Ref 3).  Investigations using Monarch heads have not only provided exquisite detail of the daily, light-dependent oscillations in the amounts of these proteins but have also revealed a ‘complex relationship’ of molecular happenings.
 
Indeed, the activities of both Cryptochrome 2 and Timeless are intertwined with at least two other timing proteins, called ‘Clock’ and ‘Cycle’ (Ref 3).  Preliminary results suggest that Period, Timeless and Cryptochrome 2 form a large protein complex, with Cryptochrome 2 acting as a repressor of Clock- and Cycle-driven transcription.  Cryptochrome 2 is also intimately involved with an area of the Monarch’s brain called the central complex, which likely houses the light-dependent ‘sun compass’ so critical for accurate navigation (Ref 3).
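The clock architecture described above is a delayed negative feedback loop: Clock and Cycle drive transcription, and the accumulating Period/Timeless/Cryptochrome 2 complex shuts that transcription down again. As an illustration only, here is a toy Goodwin-style oscillator in Python; the equations and rate constants are generic textbook choices invented for this sketch, not the Monarch clock model of Ref 3:

    def simulate(hours=96.0, dt=0.01):
        """Toy negative-feedback clock: mRNA -> protein -> nuclear repressor -> less mRNA."""
        m, p, r = 0.1, 0.1, 0.1                 # mRNA, cytoplasmic protein, nuclear repressor
        t, trace = 0.0, []
        while t < hours:
            dm = 1.0 / (1.0 + r**12) - 0.2 * m  # transcription, cooperatively repressed by r
            dp = 0.5 * m - 0.2 * p              # translation and decay
            dr = 0.3 * p - 0.2 * r              # nuclear entry and decay
            m, p, r = m + dm * dt, p + dp * dt, r + dr * dt
            t += dt
            trace.append((t, m))
        return trace

    for t, m in simulate()[::800]:              # sample every ~8 simulated hours
        print(f"t = {t:5.1f} h   mRNA level = {m:.2f}")   # rises and falls cyclically

The point of the sketch is only the architecture: negative feedback with sufficient delay and cooperativity yields self-sustaining oscillations, which is what the light-dependent protein rhythms measured in Monarch heads look like.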
 
Reppert’s team has speculated that the Monarch’s dual-Cryptochrome light response system evolved into the single-Cryptochrome systems found in other insects through a hypothetical gene loss event (Ref 3).  Furthermore, they have suggested that the dual-Cryptochrome system itself arose through a duplication of an ancestral gene (Ref 3).  Biologist Christopher Wills wrote of gene duplication as a ‘rare occurrence’ in which “an extra copy of a gene gets placed elsewhere in the genome” (Ref 4, p.95).  Seen from an evolutionary perspective, these two gene copies are then “free to evolve separately…shaped by selection and chance to take on different tasks” (Ref 4, p.95).
 
While experiments have shown that transgenic Monarch Cryptochrome 1 can rescue Cryptochrome deficiency in other insects such as fruit flies, what remains elusive is how exactly gene duplication could have led to two proteins with such widely differing functions as those found in the two Monarch Cryptochromes.  Indeed, biochemist Michael Behe has been instrumental in revealing the explanatory insufficiencies of terms such as gene duplication and genetic shuffling within the context of molecular evolution.  As Behe expounded:
 
“The hypothesis of gene duplication and shuffling says nothing about how any particular protein or protein system was first produced - whether slowly or suddenly, or whether by natural selection or some other mechanism… In order to say that a system developed gradually by a Darwinian mechanism a person must show that the function of the system could “have formed by numerous, successive slight modifications”…If a factory for making bicycles were duplicated it would make bicycles, not motorcycles; that’s what is meant by the word duplication.  A gene for a protein might be duplicated by a random mutation, but it does not just “happen” to also have sophisticated new properties” (Ref 5, pp. 90, 94).
 
When it comes to supplying a plausible mechanism for how gene duplication and subsequent natural selection led to two distinctly functioning Cryptochromes, and how these then integrated with other time-regulatory proteins in Monarch brains, there is a noticeable absence of detail.  Each successive slight modification of a duplicated gene would have had to confer an advantage for selection and chance to get anywhere.  Furthermore, the newly duplicated Cryptochrome would have had to become successfully incorporated into a novel scheme of daylight processing for migration patterns to begin.
 
Evolutionary biology must move beyond its hand-waving generalizations if it is to truly gain the title of a rigorous scientific discipline.  In the meantime, protein systems such as the Monarch’s Cryptochromes will continue to challenge what we claim to know about evolutionary origins.
     
References
1. NOVA: The Incredible Journey Of The Butterflies, aired on PBS on January 27, 2009. See http://www.pbs.org/wgbh/nova/butterflies/program.html
 
2. Haisun Zhu, Amy Casselman, Steven M. Reppert (2008), Chasing Migration Genes: A Brain Expressed Sequence Tag Resource for Summer and Migratory Monarch Butterflies (Danaus plexippus), PLoS ONE, Volume 3 (1), p. e1345
 
3. Haisun Zhu, Ivo Sauman, Quan Yuan, Amy Casselman, Myai Emery-Le, Patrick Emery, Steven M. Reppert (2008), Cryptochromes Define a Novel Circadian Clock Mechanism in Monarch Butterflies That May Underlie Sun Compass Navigation, PLoS Biology, Volume 6 (1), pp. 0138-0155
 
4. Christopher Wills (1991), Exons, Introns & Talking Genes: The Science Behind The Human Genome Project, Oxford University Press, Oxford, UK
 
5. Michael Behe (1996), Darwin’s Black Box: The Biochemical Challenge to Evolution, Touchstone/Simon & Schuster, New York

 

Copyright (c) Robert Deyes, 2009

Comments
Tim:
Well, I don’t know much about computers, but how hard would it be to demonstrate this? I mean, just take “Deep Blue” and his nasty cousin “Even Deeper Blue”, remove any software concerning the rules concerning games that are draws, and see what happens.
You seem to be saying, "Remove the software necessary for calling a draw, and see if the computer calls a draw." I don't see how this demonstrates anything.
I’d suggest that it is because the Turing Machine is, by definition, responding to the tape (or input).
I'm afraid I don't follow your reasoning at all here.
The problem is not that the chess board positions are finite, the problem is that computer chess programs have no idea what to do with that information.
I don't understand. Who said that the finitude of a chess board is a problem? And whatever information you're referring to, why can't chess programs be designed to know what to do with it?
Trust me, ROb, if you put a king of each color on a chess board and tell two computers to play until checkmate, that is an infinite system. Truth be told, I don’t even know what you mean by infinite system, but I have watched two nine year olds play chess.
Infinite refers to the number of possible states, as in the previous sentence. Sorry I wasn't more clear on that. Again, it's quite trivial to program a computer to detect all non-halting situations on an 8x8 chess board. Having it detect some of those situations in a reasonably efficient manner is non-trivial, but certainly doable.
John Searle takes it seriously. (I’ve been doing a little reading — a dangerous thing.)
Searle's Chinese Room thought experiment actually argues against functionalism, a philosophical notion. One can theoretically reject functionalism without rejecting the computability of human mental activity. To the extent that Searle's argument is interpreted as an argument against such computability, it's not taken seriously by computing theorists. I don't recall ever seeing a defense of it by anyone in a computer field.
...I found evidence that humans can think “off the tape” in ways that are not simply input/compute/output that suggest no difficulties with the halting problem that computers face...
Then let me be the first to congratulate you, and I expect to see your name in lights soon.
“In fact, if you could prove that humans are capable of anything that computers are in principle incapable of, you would be quite famous.”–ROb Aw shucks, ok, here are several things that computers are incapable of doing in principle that humans find quite easy to do:
Asserting isn't proving. Saying that computers are in principle incapable of doing something means that they are logically, not just technologically (currently), incapable. So you should be able to provide a logical proof. R0b
ROb (and the chess problem): I wrote that two computers (king v king) will never stop chasing each other around a board and say (insert HAL voice here), "How about a draw, Dave?" unless it is part of their program. I wrote: Two computers NEVER will. ROb responded: "If you could prove that, you would revolutionize the world of computing theory."

Well, I don't know much about computers, but how hard would it be to demonstrate this? I mean, just take "Deep Blue" and his nasty cousin "Even Deeper Blue", remove any software concerning the rules concerning games that are draws, and see what happens. Now, before we go down the path of computers that are far more complex and thus may have a fighting chance of coming up with "how about a draw?" (Right, ROb?), I mentioned the halting problem. More specifically, I mentioned applying the halting problem solution over Turing Machines. ROb correctly stated, "The halting problem simply says that no computer can detect all conceivable non-halting situations." Here is where I would ask onlookers to consider the reasoning that underlies that difficulty. I'd suggest that it is because the Turing Machine is, by definition, responding to the tape (or input).

"In fact, it's trivial to write a program that will detect (albeit inefficiently) any non-halting situation for any system with a finite number of possible states, like a chess board."--ROb

The problem is not that the chess board positions are finite, the problem is that computer chess programs have no idea what to do with that information.

"Halting problem issues come into play only if . . . it needs to handle infinite systems."--ROb

Trust me, ROb, if you put a king of each color on a chess board and tell two computers to play until checkmate, that is an infinite system. Truth be told, I don't even know what you mean by infinite system, but I have watched two nine year olds play chess.

"And there is no evidence that humans can do better than computers in this regard."--ROb

Well, there is some evidence. See, years ago very soon after chess was invented, someone invented the idea of drawing a chess game.

"More succinctly stated, your claim is that humans are not computers in the computing theoretic sense. That is a claim that nobody has been able to validate and is not taken seriously in computing theory."--ROb

John Searle takes it seriously. (I've been doing a little reading -- a dangerous thing.)

"That's one of the reasons that the ID dichotomy of intelligence vs. chance+law is either poorly conceived or poorly stated."--ROb

Because I still don't agree with what you've written about my chess game example, and because I found evidence that humans can think "off the tape" in ways that are not simply input/compute/output, which suggests no difficulties with the halting problem that computers face, I simply don't believe that the ID position on intelligence is ill-conceived. I know I haven't linked the innovation that humans exhibit to a type of solution for the halting problem in a rigorous manner, but that's because I haven't fully understood its application to human thought and how that would work. Nevertheless, two nine year olds knew to offer each other a draw and it only took twenty minutes of trash-talking to figure it out. Let's see a computer do that!!!
"In fact, if you could prove that humans are capable of anything that computers are in principle incapable of, you would be quite famous."--ROb Aw shucks, ok, here are several things that computers are incapable of doing in principle that humans find quite easy to do: Prefer buffalo wings over chicken fingers. Write crummy poetry while believing it is quite good. Tie shoelaces while thinking about something else. Hope. Computers are in principle incapable of prefering, believing, doing something they are not thinking about, thinking itself, or hoping, and these are all things I do fairly often. Oh and one more, If I am not mistaken, computers can't, in principle, rebel. Tim
One other thing - he says his formula showed "approximately 0" Fits for the RSC and OSC sequences, but in the table it's exactly 0. And how it could be "approximately" rather than exactly 0 is unclear, given how he's measuring it. JT
KF: Just to review, you provided a link to a table from the Durston paper, presumably in reply to comments by me to the effect that no one had formalized the concept of FCSI. So I do stand corrected in that regard. Also, I need to mention now that I also said no one had offered a proof as to the percentage of strings exhibiting FCSI, as has been done for CSI. I am now aware however of that video by Durston where he provides some sort of proof pertaining to FSCI and evolution. However, I have not watched that yet, as I have been focussed on understanding the original paper on measuring FSC, from which the table you provided was taken. And what is of most interest to me now is Durston's statement to the effect that his measure can distinguish between RSC, OSC and FSC (i.e. randomness, necessity, and intelligent design). This is of interest because in the Abel paper (to which Durston alludes repeatedly) and elsewhere in Durston's own writings are assertions regarding the inability of OSC sequences to exhibit functionality (the implication being that nondeterministic agents are required to produce FSC). Durston for his part seems unequivocal in the paper that his formula distinguishes FSC from RSC and OSC:
The results for the array of random sequences and for a 50-mer polyadenosine sequence formed on Montmorillonite show that ΔHf distinguishes FSC from RSC and OSC. The results for the array of random sequences are shown in the second from the last row of Table 1, and indicate that random sequences, which are an example of RSC, tend to have an FSC of approximately 0 Fits. The results of the highly ordered 50-mer polyadenosine, which is an example of OSC, are shown in the last row of Table 1, and indicate an FSC of approximately 0 Fits. This is consistent with Abel and Trevors' prediction that neither OSC nor RSC can contain the functional sequence complexity observed in biosequences. [emphasis added]
and then again in the conclusion:
This method successfully distinguishes between FSC and OSC, RSC, thus, distinguishing between order, randomness, and biological function.
These statements seem quite unequivocal, and yet there is no further elaboration in the paper itself. All the rest of the discussion concerns the functional sequences. But upon closer inspection his claims to be able to distinguish RSC and OSC from FSC seem to be blatantly false: Durston measures FSC relative to a "ground state":
The measure of Functional Sequence Complexity, denoted as ζ, is defined as the change in functional uncertainty from the ground state H(Xg(ti)) to the functional state H(Xf(ti)), or ζ = ΔH(Xg(ti), Xf(tj)). (6) The resulting unit of measure is defined on the joint data and functionality variable, which we call Fits (or Functional bits).
Earlier Durston says there are two alternatives for a ground state, either a highly ordered ground state or a "null state" that is a ground state that is not highly ordered, but rather completely random. Apparently, it is merely a matter of convenience as to which ground state you choose. The following is his justification for choosing the null state as the ground state for measuring FSC in functional sequences:
for proteins, the data indicates that, although amino acids may naturally form a nonrandom sequence when polymerized in a dilute solution of amino acids [30], actual dipeptide frequencies and single nucleotide frequencies in proteins are closer to random than ordered [31]. For this reason, the ground state for biosequences can be approximated by the null state
However, what is being used as a ground state for a random sequence or ordered sequence? We find that in the footnotes of the table:
All values, except for the OSC example, which was calculated from the constrained ground state required to produce OSC, were computed from the null state
IOW, for both functional sequences and the random sequences the ground state is a random sequence. So this explains why the FSC for randomness is 0: the ground state it's measured relative to is also random. (And furthermore, functional uncertainty for a random sequence is independent of the actual sequence, i.e. all random sequences will have the same functional uncertainty.) For the one ordered sequence, the ground state is evidently that same ordered sequence, and that is why its FSC is also 0.

So we know now how he measures random sequences and the ordered sequence as both 0 in FSC. It's completely arbitrary. So to compare that to positive values for a functional sequence and then say you've distinguished functional complexity from randomness and order is ludicrous, IMO. At this point I sort of feel like I'm piling on, but the truth is the truth.

Note: I said his choice of a ground state was arbitrary, but presumably his rationale might be that he was choosing the ground state closest to what he was measuring. So he said he chose randomness as the ground state for a functional sequence because randomness was closer to a functional sequence than an ordered ground state. And so for measuring FSC in the random string he chose a random string as ground state, and for the ordered string, the same ordered string as a ground state. This is the best I as a layman could come up with after devoting 2 hours or so to the paper. If I had some glaring oversight or misunderstanding, please someone do point it out. JT
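For onlookers, the arithmetic JT describes can be made concrete. A minimal sketch of the Fits calculation under this reading of the paper, with ζ = ΔH as the drop in Shannon uncertainty from the chosen ground state to the measured state; all frequencies below are invented for illustration and are not Durston's data. Measuring a state against a ground state of the same kind necessarily yields 0:

    import math

    def H(probs):
        """Shannon uncertainty in bits per site."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    AA = 20                       # amino acid alphabet size
    H_null = H([1 / AA] * AA)     # null (random) ground state: log2(20) ~ 4.32 bits/site

    # Hypothetical functional site: alignment data concentrates probability on a few residues.
    H_func = H([0.70, 0.10, 0.10, 0.05, 0.05])
    print(f"functional vs null ground state: {H_null - H_func:.2f} fits/site")

    # Why RSC and OSC both come out as exactly 0 under the table's footnoted choices:
    print(f"random vs null ground state:     {H_null - H_null:.2f} fits/site")
    H_poly_a = H([1.0])           # polyadenosine: one symbol, zero uncertainty
    print(f"ordered vs ordered ground state: {H_poly_a - H_poly_a:.2f} fits/site")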
CJYMan:
ROb:
“By the same token, the question of whether a system is capable of foresight is not the same as the question of whether foresight is required to originate the system.”
That is correct and that is the question upon which this debate revolves.
I assume you're referring to the second question, not the first. None of my points in this thread pertain to the second question, so apparently I'm in the wrong debate. WRT the first question, your answer is that computers can have artificial foresight. You seem to differentiate artificial from non-artificial foresight on the basis of consciousness. If this is the ID position, then the onus is on the ID camp to define consciousness and show that computers are in principle incapable of it. That's a pretty tough row to hoe. R0b
KF: I had said I was going to study Durston's paper and the Abel paper [143] (to which Durston refers repeatedly) before getting back to you. Durston actually presents his paper as being an extension of the one from Abel, and he makes it clear from the outset that he subscribes completely to the former. So I did study the Abel paper in detail last night.

There are no proofs as such in it. Rather, it's essentially a challenge: "Here's some assertions we're making - now prove us wrong." And they do make repeated assertions to the effect that law-like determinism cannot create functional information. And it is absolutely clear that the sticking point for them is the determinism aspect of it, that is, that a law-like process cannot make "choices". There are repeated references to agents making "choices". And it is patently clear that they mean "nondeterministic" agents. I have several quotes from the paper but decided not to list them here.

I am still going over the Durston paper. The strongest statement he makes is in the conclusion, where he says he has successfully distinguished RSC, OSC and FSC via his method. If he meant to imply that he can distinguish something created via a deterministic process vs. something created by a nondeterministic agent, that is not actually possible (or else he would be world famous by now). As the table you provided shows, he measured one "ordered" sequence, 50-mer polyadenosine, using his method and got a result of 0 Fits. He also measured some random sequences and got approximately 0 Fits for those as well. However, what I'm not clear on is whether it was simply a matter of him bringing background knowledge to the table that the ordered sequence and the random sequence had no known function and thus got a 0 via his method. Or, otoh, were these results essentially inevitable because of his own definitions? He states, "The measure of Functional Sequence Complexity, denoted as ζ, is defined as the change in functional uncertainty from the ground state H(Xg(ti)) to the functional state H(Xf(ti))..." However, he previously defines the ground state as being either a random sequence or a highly ordered sequence. So if he's defining FSC as the change from the ground state, then it would seem a foregone conclusion that a random or ordered sequence would show 0 Fits. JT
CJYMan [168]: So, have you ever wanted to go to a location that you have never been (obviously are not at in the present)? Did you then pull out a map and plan your present actions based on a future destination? This is a perfect example of teleology in action. So those Monarch butterflies are all saying "Whoohoooo! We're going to Mexico! Ole!" Do we presume they even have conscious awareness of being on a journey? JT
ROb: "We need to be careful not to conflate causation with characterization. The question of whether a system reduces to matter, energy, and physical laws, is not the same as the question of whether it originated by matter, energy, and physical laws." That is exactly the point which I have been trying to make for months and especially since I commented on that post in this blog re: Dembski and Marks paper on active info. ROb: "By the same token, the question of whether a system is capable of foresight is not the same as the question of whether foresight is required to originate the system." That is correct and that is the question upon which this debate revolves. Is there a way to solve it? Can we form a falsifiable hypothesis one way or the other and then test it and potentially gather evidence in favor of it? I believe the answer to that question is "yes" and that ID Theory is one way to do so. In fact, the papers on active info (By Dembski and Marks) are extremely relevant to this question, but I'd rather not go here quite yet. Let's lay some groundwork first. JT and I are presently discussing the groundwork and you are obviously free to join in. Oh, and if you are going to *adamantly* take the opposing position, please provide evidence for it. It is much easier to be a critic (pointing out any little hole) than to defend a position. Scientific discussion after all usually revolve around finding the best of two positions, not merely poking little holes in one position. That is exactly why an ID Theory which attempts to merely poke holes in "evolution" is doomed to failure; and also why I would appreciate you defending the opposing position with evidence -- if indeed you are going to defend the opposing position as "more likely to be true" -- instead of merely poking little holes in everything I state. ROb: "Reading the above quoted paragraph, it seems to me that CJYMan is okay with the concept that a computer is capable of foresight, with the caveat that this capability must be designed into the computer by a foresighted designer. Is that correct, CJYMan, or is a computer’s foresight necessarily not real foresight?" A computer is capable of artificial foresight or the application of previous foresight since they are not yet capable of being aware of future goals yet they can produce some of the same patterns that true foresight realizes by artificially modeling a foresight process -- modeling future possibilities and harnessing law, chance and previous instructional information to accomplish future targets. However, take out the instructional information (which is neither defined by law or chance) and you are left with only law and chance and there is no evidence that these will produce anything resembling a model of foresight. So is there a way to mathematically characterize this instructional information? Is there a way to discover the best cause for this instructional information? For now, let's go back to JT and my discussion to discover the fundamentals of ID Theory. P.S. Of course a key point is that every example of artificial foresight that we are aware of has true foresight in a complete causal chain. CJYman
JT, it seems that we may finally begin to make progress in the discussion.

Foresight = awareness of future goals which do not yet exist (whether those goals are the result of free will or any form of determinism or not).

Application of foresight = the generation of those goals by harnessing law and chance to engineer a solution to those goals.

So, have you ever wanted to go to a location that you have never been (obviously are not at in the present)? Did you then pull out a map and plan your present actions based on a future destination? This is a perfect example of teleology in action. Remember when I discussed teleology above in comment #136 and how that is what this debate has been revolving around for centuries (determinism and free will being merely secondary questions and not fundamental to the discussion). Are you aware of a future goal (destination) which you have presently not obtained? Do you use the location of this future goal to plan your present course of action?

Another example would be: do you ever have a future goal of getting across a concept to someone? You will notice that at the point of that thought to explain the concept, you have not yet accomplished your goal [to explain the concept]. At this point that is merely a future target which does not yet exist, yet you are aware of it. You can think about it and you can begin to plan to make it a reality, which of course would be your next step -- planning. Ultimately though, you will apply your foresight to harness law and randomness *in the present* to engineer a paragraph or two on this blog in order to hopefully accomplish your goal *in the future* (convincing someone of your position or merely informing someone of your position). Still following? Have you ever done this before? Do you possess foresight? CJYman
CJYMan:
Actually, that assumption is not necessary, since even if foresight can be the result of a program, it has been shown above that the program would also need previous foresight in its full causal chain (or else exist eternally). Thus foresight would breed further foresight, CSI, etc etc etc; and *only* law and chance would again not be the best explanation.
We need to be careful not to conflate causation with characterization. The question of whether a system reduces to matter, energy, and physical laws, is not the same as the question of whether it originated by matter, energy, and physical laws. By the same token, the question of whether a system is capable of foresight is not the same as the question of whether foresight is required to originate the system. Reading the above quoted paragraph, it seems to me that CJYMan is okay with the concept that a computer is capable of foresight, with the caveat that this capability must be designed into the computer by a foresighted designer. Is that correct, CJYMan, or is a computer's foresight necessarily not real foresight? R0b
Tim:
Two computers NEVER will.
If you could prove that, you would revolutionize the world of computing theory. In fact, if you could prove that humans are capable of anything that computers are in principle incapable of, you would be quite famous. More succinctly stated, your claim is that humans are not computers in the computing theoretic sense. That is a claim that nobody has been able to validate and is not taken seriously in computing theory. That's one of the reasons that the ID dichotomy of intelligence vs. chance+law is either poorly conceived or poorly stated.
Yes, never is a strong word, but I’ll rely on an application of Turing’s proof over Turing machines.
Unfortunately, the halting problem proof doesn't say that. Computers are routinely used to detect non-halting situations. The halting problem simply says that no computer can detect all conceivable non-halting situations. In fact, it's trivial to write a program that will detect (albeit inefficiently) any non-halting situation for any system with a finite number of possible states, like a chess board. Halting problem issues come into play only if the program needs to achieve a certain efficiency for any finite system of arbitrary size, or if it needs to handle infinite systems. And there is no evidence that humans can do better than computers in this regard. R0b
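R0b's "trivial (albeit inefficient)" detector can be sketched in a few lines: for a deterministic system with finitely many states, any revisited state proves the run loops forever. The step and halt functions below are toy stand-ins, not a chess engine; a real chess implementation would hash board positions plus side-to-move and the relevant repetition rules:

    def runs_forever(start, step, is_halt):
        """True if the run from `start` provably never halts (finite state spaces only)."""
        seen = set()
        state = start
        while not is_halt(state):
            if state in seen:        # deterministic system revisited a state: a loop
                return True
            seen.add(state)
            state = step(state)
        return False                 # reached a halting state

    # Toy example: doubling mod 6, halting only at 0 -- which is never reached from 1.
    print(runs_forever(1, step=lambda s: (s * 2) % 6, is_halt=lambda s: s == 0))  # True

The memory cost is proportional to the number of reachable states, which is why this is inefficient for chess but still finite, exactly as the comment says.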
JYT - if you could just provide your definition of foresight now and disavow MacNeil’s characterization that would be helpful. JYT = CJYMan JT
CJYMan: Do I use foresight when reading a map? Depending on what you mean by foresight - yes. JT
CJYMan: This is about as succinct as I can be as to where I stand (quoting myself from [143]):

A trivial proof that one program can generate another: I just zipped a program on my hard drive. It reduced the size from 722k to 220k. (This also indicates that functional information is "highly compressible", BTW.) So the unzip program plus a 220K random string not identifiable as anything results in a very complex functional program. You could say that the real program was "already there" in the compressed version. But there is nothing there in that random 220K string to indicate that. So why couldn't you look out in nature prior to life and find a lot of diffuse and disparate things out there that don't look anything like life, but were in fact transformed into life? And note that our unzip program did not have any "foresight" either.

Of course, you could absolutely note that all those factors out in nature that resulted in life equated to life, just as our zipped file of random data + the unzip program equals our complex functional program. But it shows what should be obvious - that life can emerge via blind physical processes from something that does not look like life at all. And of course, you can absolutely say that this implies something existing at the beginning of the universe that exceeds the power of chance to create. However, that observation does not enlighten us as to the actual naturalistic method of how life actually emerged after the universe began. JT
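JT's zip experiment is easy to reproduce in miniature. A minimal sketch (the "program" text below is a stand-in; the 722k/220k figures from the comment are not reproduced): the compressed bytes look like noise, yet a fixed, mechanical decompression step recovers the functional original with no foresight involved:

    import zlib

    # Stand-in for the zipped program: structured, repetitive content compresses well.
    original = b"def navigate(sun_azimuth, clock_phase):\n    return sun_azimuth - clock_phase\n" * 2000
    packed = zlib.compress(original, 9)

    print(f"original:   {len(original)} bytes")
    print(f"compressed: {len(packed)} bytes (high-entropy, 'random-looking')")

    restored = zlib.decompress(packed)   # mechanical, foresight-free inverse
    assert restored == original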
Or, are you merely stating that everything can be described by a program. If so, I have no contention with that That should have been in quotes. JT
So are you trying to say that everything can be described by randomness/programming, therefore no foresight is necessary in the generation of certain patterns.

I am saying everything can be described by randomness/programs, that is, randomness/laws. Certainly there has been a random component to it. But the majority of it could not be explained by randomness. Considering Dawkins' weasel, we do of course understand that if you have laws to the effect, "If a mutation gets me closer to a sentence about weasels then accept, else reject," randomness can accomplish quite a bit with such a set of laws.

If so, then please go ahead and provide some evidence for your position and show me a program guided by no previous foresight -- based on only background noise (chance) and law (arbitrary collection of laws with no consideration for future results) -- and let me know when CSI is produced.

At some point, Dembski or someone started addressing the notion of "arbitrarily chosen laws" and the inability of such to accomplish anything. Such a characterization has muddied the waters. I don't think "arbitrary" laws will accomplish anything any more than pure randomness will. I don't think anyone would ever affirm that "arbitrary" anything could be expected to accomplish something specific. But I don't think the laws had to be "chosen" as such. (I know you didn't use the term "chosen" yourself here, but someone did in a previous exchange with me, so I think that's the standard format for that phrase.) If the implication is that someone has to "choose" the correct laws, that's imposing on the discussion some metaphysical assumptions through rhetoric.

Or, are you merely stating that everything can be described by a program. If so, I have no contention with that.

I apologize to whatever extent we've been arguing past each other (if that is the case). JT
JYT - if you could just provide your definition of foresight now and disavow MacNeil's characterization that would be helpful. JT
CJYMan [154]: "The problem is that, sure, randomness can be summoned to explain away anything … even law like behavior. The question is “what is the best explanation?” You are completely misunderstanding and mischaracterizing my point about randomness. I said there were classes of strings that would be extremely unlikely to occur by randomness. Actually I have said randomness is not an explanation. I have said many times the explanation is LAW (not "intelligence", not randomness either). JT
JT: Trevors and Abel were the ones who identified the 3-D contrasts OSC, RSC, FSC. Durston is lead author on the paper in which the measurements of FSC were published. GEM of TKI kairosfocus
CJYMan [155]: I am sorry if I missed your definition of foresight - why not just repeat it, and specifically disavow Allan_MacNeil's characterization. JT
CJYMan: If I can attempt to close the loop here - so if foresight (at least in some transcendent sense) is not necessary to generate any string, what are we left with? Namely the following: Conservation of Information - at least how I would use that term. I don't know how Dembski specifically defines that, but obviously if you have some process F(X) to generate string Y, then F(X) cannot be less complex than Y, and F(X) equates to Y, which means you're just pushing back what needs to be explained. If this was already your understanding, then I do sincerely apologize for belaboring the obvious. But most ID'ers do seem to have an assumption about intelligence transcending chance and law, or some sort of notion of transcendent conscious foresight being necessary to create FCSI. JT
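JT's "cannot be less complex" intuition matches a standard bound from algorithmic information theory; as a gloss on the comment (not anything stated in the Dembski or Durston papers): for a fixed universal machine, if running program F on input X prints Y, then

    K(Y) <= K(F) + K(X) + c,

where K is Kolmogorov complexity and c is a machine-dependent constant (plus a small pairing overhead). The description of the generating process is itself a description of the output, so invoking F(X) only relocates the complexity that needs explaining.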
P.S. JT, I have already defined "foresight" for you a few times, yet you continued to ignore it. Why should I think that things will now change? CJYman
JT, your comments re: programs. Are they merely another way of stating what you stated earlier, namely ...

JT: "There is no binary string that cannot be the result of pure randomness either."

... to which I responded ...

"Technically … you're right. Practically … you're wrong. The problem is that, sure, randomness can be summoned to explain away anything … even law-like behavior. The question is "what is the best explanation?" In fact, if we took your premise here and ran with it then science, as the discovery of laws of nature, would not exist, as there would be no concept of law. It could just all be explained by chaotic randomness. "Planets orbiting the sun?" … easily computable as a random string; nothing to see here. Randomness did it. No true correlation to a fundamental principle at the foundation of our universe. "Chemicals bonding regularly?" … easily computable as a random string; nothing to see here. Randomness did it."

So are you trying to say that everything can be described by randomness/programming, therefore no foresight is necessary in the generation of certain patterns? If so, then please go ahead and provide some evidence for your position and show me a program guided by no previous foresight -- based on only background noise (chance) and law (arbitrary collection of laws with no consideration for future results) -- and let me know when CSI is produced.

Or are you merely stating that everything can be described by a program? If so, I have no contention with that, and I don't know quite enough about QM to understand the implications of many of its possible interpretations. As I have shown, it makes no difference, so are you going to answer the question or are we going to agree to disagree and not continue to broaden both of our understanding of the fundamentals of ID Theory? CJYman
CJYMan, my point is there are multitudinous and disparate methods for generating any string (whether it contains FCSI or not). How is foresight necessary for any string (if foresight means something specific)? JT
KF, I want to review the Durston paper and the Abel and Trevors paper in much greater detail today before getting back to you. I will stand by my previous comment that Durston has a qualitative understanding of order (revolving around the idea of something being generated by a small program) and his paper was not intended to further clarify this notion of order. JT
JT, So re: the first part of your last comment, do we just agree to disagree or are you going to answer the question and continue the discussion to discover if ID Theory has merit and/or is scientific? JT: "For any binary string (one containing functional info or not) there are an infinite number of programs that will output it" Sure. Your point is ... ? JT: "By program I would mean “program-input” meaning that crucial aspects of the solution could of course come from what is fed into the process. But there are an unending array of methods by which some binary string could be arrived at." Uhuh ... and ... CJYman
By program I would mean “program-input” meaning that crucial aspects of the solution could of course come from what is fed into the process. But just to be clear for any given binary string there are an infinite number of programs that receive no input that can output it. JT
JYT: "Since you may have many misconceptions of ID Theory ..."

You could of course always enumerate succinctly what you perceive to be my misconceptions of I.D. and provide quotes of mine as proof.

"foresight is foresight whether it is determined to use its foresight in a certain way or not; whether it is determined to exist or not. We experience and use our foresight every day, thus we know it exists whether we have free will or not; whether the universe has a deterministic structure or not. Have you ever used a map and your foresight to plan a route to a future destination you wished to travel to? As I stated, I can't continue this discussion until you answer that very simple question, since it is the type of question which begins the foray into ID Theory."

As I mentioned, Allan_MacNeil (at TT) gave a description of foresight that seemed pretty bizarre:
Furthermore, it seems clear from previous discussions in this forum that IDers assume that information can be "foresighted"; that is, it can somehow anticipate future outcomes, not by "induction" from the past but by some kind of "deduction" from the future. ... whether any form of information transfer can be genuinely "foresighted" (i.e. can be modified by events that have not yet happened, rather than simply predicting future events based on events that have happened in the past).
As I also mentioned, I was incredulous that ID'ers actually thought that, but then on Sunday, in reviewing your comments, it slowly dawned on me that you probably did think that. If by foresight you have additional baggage associated with it about conscious awareness of future states of reality or some such, and are saying that that's necessary to create complex functional info, I would disagree with that. And if you're just saying you think it's possible for a deterministic mechanism to have this type of transcendent foresight, but that sort of transcendent foresight is still necessary, I disagree with that as well.

For any binary string (one containing functional info or not) there are an infinite number of programs that will output it. By program I would mean "program-input", meaning that crucial aspects of the solution could of course come from what is fed into the process. But there are an unending array of methods by which some binary string could be arrived at. JT
One warning to you, JT. My version of ID Theory may be slightly different from that of the majority of ID proponents. However, that is the beauty of science -- many ideas competing and the best one winning. However, all ID hypotheses are the same in that they answer in the affirmative that the effects of intelligence are detectable and are not best explained by either chance or law. That is ... they agree on the fundamentals, which is what I wish to begin to discuss with you. CJYman
JT, actually reading through my last comment, I guess that *would* mean that I am implying that you are splitting hairs over the issue of determinism vs. non-determinism. But, as I said, that is not to say that it is not an interesting question that may arise as a secondary issue; I merely have shown that it is not fundamental to ID Theory. Since you may have many misconceptions of ID Theory, I have decided to try to explain a non-strawman variation [understood by myself, an ID proponent] to you from the ground up. This has been started in my comments #136-138 as I just stated in my last comment. CJYman
JT, I never meant to imply that you are splitting hairs over "determinism vs. non-determinism." That is a very interesting question, yet it has no use to the fundamentals of ID Theory. I explained this in comments #136-138, where I concluded with a basic summary of the key issue: "foresight is foresight whether it is determined to use its foresight in a certain way or not; whether it is determined to exist or not. We experience and use our foresight every day, thus we know it exists whether we have free will or not; whether the universe has a deterministic structure or not. Have you ever used a map and your foresight to plan a route to a future destination you wished to travel to?" As I stated, I can't continue this discussion until you answer that very simple question, since it is the type of question which begins the foray into ID Theory. CJYman
JT: A few correctives:

1] 141: On the concept of FSCI, I believe that all Durston has done would be to come up with a definition of it such that it can be shown to be at high levels in DNA and programs written by humans, but not elsewhere. So in essence he's formalized what is already intuitively obvious to everyone, that there is something different about biology. But he has not attempted to show what the probability is of getting a binary string with FCSI by blind chance.

First, if you look at the paper, you will see that his formalisation is QUANTITATIVE to the point where he has published 35 measured values for proteins and related molecules. Second, he has done so by doing studies on the actual observed patterns of variations in key proteins, i.e. he is doing a frequentist, a posteriori probability measure, and using that to assess the information per symbol. In so doing, he is also looking at the config space as a whole. His metrics are information theoretic ones, starting with H, and extend to any other cases of digital information where there are islands of observed function and variations within the islands. Codes with self-corrective redundancies in them [try simple "three-peat" majority-vote codes as a simple case] are an obvious parallel case.

2] 143: "Qualitative" is a keyword indicating that an idea is not fully formed, i.e. subjective. Note that Durston's own concept of FSCI is quantitative, i.e. they provide a specific method for measuring something, that is a reason they have a paper. You don't write a scientific paper to address something you can't quantify. (This is not to imply of course that being quantitative is enough to establish validity or utility on its own.)

Multiple misconceptions. First, when we measure "how much," there is an implicit "of what?" in it. That is, quality and quantity are inseparable. What we do is we identify something, then set up a "yardstick" for it, then apply a scale to "quantify": RION -- ratio, interval, ordinal, nominal [i.e. state]. When Trevors and Abel identified and actually generated a 3-d graphical scale showing the contrasts between orderly, random and functional sequence complexity, they discussed the matter in both qualitative and quantitative terms. For instance, Shannon-type metrics will scale OSC as low on complexity, will give a rather high value to a random sequence, and will give a somewhat lower value to a functional one. This last, because it embeds some redundancy. [Recall, the Shannon metric H gives a weighted sum on frequency of appearance of symbols, not a flat random distribution. So, it measures redundancy and patterns, giving random sequences -- thus very aperiodic -- high values, and giving ordered determined strings low info content. Redundancy as a key fact of life in a noisy world makes functional strings more redundant than random ones, but to contain info they must have a lot of aperiodicity.] Similarly, random sequences are not K-compressible. [You have to read out the exact lottery-winning combination, as it is not compressible into a simpler descriptive format.] Finally, functional sequences will have: function, which is recognisable. In context, T & A et al have focussed on algorithmic functionality. Structural functionality is also a possibility: wings will not work if they are just any shape, though symmetric wings fly very well thank you [contrasting the simplistic explanation that omits the circulation issue . . . angle of attack my friends, angle of attack].
Durston then built on this by providing a specific metric for FSC, based on empirics.

3] 142: he's [Durston] claiming to be able to distinguish "order", randomness and functional info - there's nothing there about the probability of FCSI.

JT, the basic metric being used is H = - SUM p_i log2 p_i. This is a standard info theoretic metric and embeds probability. Further, the context of the discussion has to do with empirical islands of function vs the config space as a whole. Once you speak of bits, you speak of probability. Like it or lump it, info in bits is based on probability, and lurking behind is statistical-form thermodynamics.

4] 144: nothing in his paper is intended to demonstrate that chance and law cannot generate function. He assumes that going in.

First, you are missing the point of the cited remark:
As Abel and Trevors have pointed out, neither RSC nor OSC, or any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life [5].
In short, and ever since Orgel in 1973, we have known that informational molecules in biology are FUNCTIONAL, and that this is distinct from the characteristics of orderly and random sequences: functional, specified complexity, not random complexity. This is an observable and quantifiable fact of life.

(A NOTE: You need to FIRST read for understanding, not to make rhetorical talking points. That way, you are more likely to get the point and to understand the evidence and reasoning. A good test for understanding is accurate summary. Another is the ability to have "closure" on key ideas -- you run into the same cluster of ideas again and again, and are not being quickly caught out by novelties. I just for instance ran across a key point on Faraday's generator that I had not seen in classes or books: when the cylindrical magnet is spun instead of the disk, no emf forms. Similarly, if both are locked together and are jointly spun with no relative motion, an emf forms. Why? ANS: spinning a bar magnet on its axis does not spin the effective solenoidal B field -- it is relative, cross-ways motion of field and charge that gives rise to the Lorentz force's magnetic component. [Magnetism is in significant part a relativistic effect.] And there are THREE entities involved: (a) magnet, (b) disk and (c) the circuit that the disk is a part of. Spinning disk and magnet locked together still has relative motion to the rest of the loop for the circuit, so an emf will appear. [So, thanks to the didactics of a complex subtlety, there was a little hole in my knowledge about the Faraday generator. Kudos to Wiki for a good little article on it.] Relevance? Hoyle's theory on magnetic braking in solar system formation, and onward extensions trying to fix the gaps in Hoyle's work. Verdict: the gaps are not closed.)

Moreover, it was never a question that in principle random chance plus mechanical forces could get us to any config. The real issue is a search space one; the very same one that lies at the base of the statistical validation of the second law of thermodynamics. Namely, macrostates that move to higher entropy invariably by far outweigh the ones that lead to interesting configs. So, undirected contingency will overwhelmingly trend drastically to greater entropy, to the point of making a reliable law. In principle the O2 molecules in the room where you sit could at random move to one end, leaving you gasping for breath, but reliably, that will not happen on the gamut of our observed cosmos over its lifetime. Likewise, on very similar grounds, a clutch of rocks avalanching down a hillside may possibly spontaneously form: "WELCOME TO WALES," but the odds are again beyond merely astronomical. Thus, we see why lucky noise is not a reasonable source for FSCI.

In short, probabilities emerge naturally here; they are not arbitrarily imposed. And, that answers to your selectively hyperskeptical issue on "demonstration." Durston is not trying to "prove" that random chance CANNOT get us to FSC; he is looking at the relevant probabilities and is thus drawing out the resulting information metrics in light of information theory ideas and principles. Sufficiently successfully to have published 35 peer reviewed values.

5] 141: I said at one point a while back that FCSI was a subset of CSI, but that was an assumption on my part. It seems to me now that FCSI revolves around the idea of a symbolic program, and thus a different idea from CSI.
And furthermore, FCSI, contrary to CSI, would presumably not be inversely proportional to pattern complexity, as is the case with CSI. Of course, that is the problem with CSI - very simple patterns are highly unlikely as well. However, there are no formal proofs regarding the probability of getting FCSI by chance.

FSCI is that subset of CSI where the specification is by observed function. Such functions may -- for example -- be
(a) linguistic [e.g. 143+ ASCII characters comprising contextually responsive English text],
(b) algorithmic [e.g. stored programs or data strings/structures of at least 1,000 bits],
(c) structural [e.g. the drawing data for a wing or other functional feature of a mechanical system, or even a good old-fashioned sailing ship, house or arrowhead, which will naturally take up of course more than 1,000 bits]
FOOTNOTE: I keep getting the impression that you keep looking so hard for destructive counter-examples that you have not paused to properly understand the examples. In fact, CSI began as FSCI, in a biological, origin of life studies context. Specified, organised complexity of informational character was seen as the key element in the nanomachines of the cell, by contrast with crystals [which are 3-d "polymers"] and random tars. Dembski picked up the concept, linked it to wider contexts, and has sought to model the key factors. In so doing, he has hit on the target zone of interest in the config space as the key issue, and the onward question is to identify the zone of interest. K-compressibility, absence of which is very often tied to non-function [at random, long strings overwhelmingly will not be compressible or simply and independently describable], turns out to be a key aspect of that more general work. But also, function is not equivalent to compressibility. And, hence the significance of T & A's work. (Cf. their fig 4.) Durston extends this work, providing an empirically anchored metric and giving 35 specific values. Note that Chiu, Trevors and Abel are contributing authors as well. _______________ I trust these notes will help you clear up your key misunderstandings. GEM of TKI kairosfocus
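KF's claim about how Shannon-type metrics scale OSC, RSC and FSC can be checked directly with the weighted-sum formula he cites, H = - SUM p_i log2 p_i, computed from observed symbol frequencies. The three sequences below are invented examples for illustration, not data from any of the papers under discussion:

    import math
    from collections import Counter

    def shannon_bits_per_symbol(seq):
        n = len(seq)
        return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

    osc = "ABABABABABABABABABAB"   # ordered repetition: low uncertainty
    rsc = "QWMZPXLKJHGFDSAYTREC"   # aperiodic, near-uniform: high uncertainty
    fsc = "MKTAYIAKQRQISFVKSHFS"   # protein-like: aperiodic but with some redundancy

    for name, seq in [("OSC", osc), ("RSC", rsc), ("FSC", fsc)]:
        print(f"{name}: {shannon_bits_per_symbol(seq):.2f} bits/symbol")
    # Prints roughly 1.0, 4.3 and 3.5: ordered lowest, random highest, functional in between.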
Durston: "As Abel and Trevors have pointed out, neither RSC nor OSC, or any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life [5]." The above is at the beginning of the Durston paper. IOW, nothing in his paper is intended to demonstrate that chance and law cannot generate function. He assumes that going in. JT
[142]: Durston et al., in the conclusion of their paper (http://www.tbiomed.com/content/4/1/47/), say they are able to distinguish functional information from "order" (or "OSC") and randomness, so there's a question about what their concept of "order" is. At the beginning of the paper we're informed where they're getting their idea of "order" from: "Abel and Trevors have delineated three qualitative aspects of linear digital sequence complexity [2,3], Random Sequence Complexity (RSC), Ordered Sequence Complexity (OSC) and Functional Sequence Complexity (FSC). [emphasis added]"

"Qualitative" is a keyword indicating that an idea is not fully formed, i.e. subjective. Note that Durston's own concept of FSCI is quantitative, i.e. they provide a specific method for measuring something; that is a reason they have a paper. You don't write a scientific paper to address something you can't quantify. (This is not to imply of course that being quantitative is enough to establish validity or utility on its own.) Nothing in the Durston paper is presented to further specify or quantify the notion of "order", so their claim of being able to distinguish order from function in the conclusion is misleading - all they could possibly mean is that they've applied their measure to some specific sequences that have been previously characterized as orderly, and gotten back a number consistent with what they had intended (I would say).

But let's go to the paper from Abel and Trevors, as Durston says that's where his idea of "order" comes from (http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1208958): "A sequence is compressible because it contains redundant order and patterns. Law-like cause-and-effect determinism produces highly compressible order." Note that this is how I myself characterized I.D.'s conception of order in [120]:
Let me talk about law for a moment. I.D. wants to associate "law" exclusively with processes characterized by algorithmic simplicity. So in terms of programs, I believe that I.D. would say that laws are to be associated with very small programs. All (Turing machine) programs are deterministic. I would say that "necessity" refers to processes characterizable by programs. It would seem to me to be completely arbitrary to say that in order to be considered "law", programs cannot exceed a certain degree of complexity (i.e. length).
To say something is highly compressible means that it is generatable by a small program. So this "order" to I.D. just means "generated by a small program". But how small? It isn't clear. Any binary string, even one that contains "functional information", is generatable by a program of some size. However, Abel and Trevors appear to deny this: "Such forced ordering precludes both information retention and freedom of selection so critical to algorithmic programming and control." First of all, a very small program could generate some sort of program with functional information, despite what Abel and Trevors say. But their problem with programs and law appears to be not so much related to size but rather that, in their minds, a program (of any size) precludes choice. IOW, they think that writing a program containing functional information is not actually within the ability of a program to do. But that is nonsense. And if the intention in I.D. is to rule out what nature can achieve on its own, because of the assumption that nature is controlled by "laws" (i.e. a small program, in I.D.'s conception), this implies that the only things predictable in nature are really simple things, and that anything in nature that is complex is also unpredictable. That would seem to be an absurd and completely unproductive assumption for someone to make about nature. A trivial proof that one program can generate another: I just zipped a program on my hard drive. It reduced the size from 722k to 220k. (This also indicates that functional information is "highly compressible", BTW.) So the unzip program plus a 220k random-looking string not identifiable as anything results in a very complex functional program. You could say that the real program was "already there" in the compressed version. But there is nothing in that random-looking 220k string to indicate that. So why couldn't you look out in nature prior to life and find a lot of diffuse and disparate things out there that don't look anything like life, but were in fact transformed into life? And note that our unzip program did not have any "foresight" either. Of course, you could absolutely note that all those factors out in nature that resulted in life equated to life, just as our zipped file of random-looking data + the unzip program equals our complex functional program. But it shows what should be obvious: that life can emerge via blind physical processes from something that does not look like life at all. And of course, you can absolutely say that this implies something existing at the beginning of the universe that exceeds the power of chance to create. However, that observation does not enlighten us as to the actual naturalistic method of how life actually emerged after the universe began. ------------------ And a side note to CJYman: I can try to revisit your latest post more systematically tomorrow, but here's what I remembered of what I wanted to say: You say that I am splitting hairs about determinism vs. non-determinism. And I believe you imply that it is an ongoing and presumably irreconcilable debate whether or not nature is ultimately deterministic. But considerations of QM aside, for example, that is just not the case. Science always tries to derive a deterministic mechanism, a program, that will account for the emergence of some phenomenon in nature.
They can't proceed on the assumption that, "Well, phenomena at the nano-scale have proved unpredictable, so let's just assume that as a strong possibility with whatever new phenomenon we encounter or whichever phenomenon we haven't explained yet." Even QM theory is expressed in terms of deterministic laws (although probabilistic ones). If you want to derive probabilistic laws pertaining to life with rigor comparable to QM theory, that's one thing. But otherwise, your implication that life may just be one of those nondeterministic things (i.e. one that requires a nondeterministic, unpredictable intelligence) is the real red herring, IMO. JT
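A minimal sketch of the zip/unzip point above, in Python with the standard zlib module; the "functional" payload here is an invented stand-in for the 722k program, not the actual file:

    import zlib

    # Invented stand-in for a "functional" artifact (e.g. a program):
    functional = b"def greet(name):\n    print('Hello, ' + name)\n" * 40

    blob = zlib.compress(functional, 9)
    print(len(functional), "bytes ->", len(blob), "bytes")

    # On its own, the blob looks like unstructured noise, yet
    # decompressor + blob deterministically reconstructs the original
    # functional artifact; no foresight is involved in this step.
    assert zlib.decompress(blob) == functional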
KF: Here's the conclusion of that paper: "A mathematical measure of functional information, in units of Fits, of the functional sequence complexity observed in protein family biosequences has been designed and evaluated. This measure has been applied to diverse protein families to obtain estimates of their FSC. The Fit values we calculated ranged from 0, which describes no functional sequence complexity, to as high as 2,400 that described the transition to functional complexity. This method successfully distinguishes between FSC and OSC, RSC, thus, distinguishing between order, randomness, and biological function." So, he's claiming to be able to distinguish "order", randomness and functional info; there's nothing there about the probability of FSCI. I will go back now and take a closer look at his conception of "order", because, as I've said, any type of string whatsoever can be output by a program (i.e. a set of laws). JT
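For concreteness, a rough sketch of the kind of per-site calculation behind a Fit value. This simplifies Durston et al.'s actual method (it ignores gaps, sample-size corrections, and their use of observed ground-state frequencies), and the four-sequence alignment is invented:

    import math
    from collections import Counter

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
    H_GROUND = math.log2(len(AMINO_ACIDS))   # null-state entropy per site, ~4.32 bits

    def fits(alignment):
        """Simplified FSC estimate in Fits: sum over aligned sites of
        (ground-state entropy minus observed functional-state entropy)."""
        total = 0.0
        for site in zip(*alignment):
            n = len(site)
            h_func = -sum((c / n) * math.log2(c / n)
                          for c in Counter(site).values())
            total += H_GROUND - h_func
        return total

    # Toy alignment of four sequences from a hypothetical protein family:
    print(round(fits(["MKTA", "MKSA", "MRTA", "MKTA"]), 2), "Fits")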
KF: You actually zeroed in on a relatively important point of mine, so let me respond to you first. CJYMan's comment was, "Tell me, what is the probability of the random generation of Shakespeare's Hamlet?... Are you still not understanding that FSCI is highly unlikely to generate itself by chance?" As I remarked to CJYMan (and I assume you recognize this as well), you can't talk about the probability of just getting Hamlet, as that is just one particular string. You have to talk about the probability of getting some property by chance (e.g. compressibility). On the concept of FSCI, I believe all Durston has done is come up with a definition of it such that it can be shown to be at high levels in DNA and in programs written by humans, but not elsewhere. So in essence he's formalized what is already intuitively obvious to everyone: that there is something different about biology. But he has not attempted to show what the probability is of getting a binary string with FSCI by blind chance. I would remark that every binary string is a program. If the assumption is that strings exhibiting FSCI are extremely rare, what is the proof of that? As for CSI, that conception revolved around compressibility, that is, how small a description exists for a given string. Furthermore, the smaller the description that exists for a string, the more unlikely the string is to occur by chance. And at one point Dembski says that we know the percentage of compressible strings is extremely small (though their number is infinite). He doesn't mention that this would be a formally derived conclusion, but I'll take him at his word on it; it's not just a matter of common sense or assertion. I know I said at one point a while back that FSCI was a subset of CSI, but that was an assumption on my part. It seems to me now that FSCI revolves around the idea of a symbolic program, and is thus a different idea from CSI. And furthermore, FSCI, contrary to CSI, would presumably not be inversely proportional to pattern complexity, as is the case with CSI. Of course, that is the problem with CSI: very simple patterns are highly unlikely as well. However, there are no formal proofs regarding the probability of getting FSCI by chance. But anyway, as I notice now that the Durston paper itself is available from that link you supplied, I'll review it again and see if I need to revise my comments above. JT
JT, ... and one final thing. If you equate law with determinism, then you'd have to provide evidence that our universe has a deterministic structure. It seems, though, that quantum mechanics shows that our universe may not have a deterministic structure and thus your definition of "law" may not even exist. This is yet another reason why equating law with "determinism" is both non-essential and counter-productive in discussing the fundamental tenets of ID Theory. CJYman
JT: I will only add that FSCI is first a descriptive term for a phenomenon observed by OOL researchers in the 1970's and 80's, i.e. before ID existed as a scientific school of thought -- workers like Orgel, Yockey, Wickens et al. (Cf. the WAC and glossary at the top of this page -- just what exactly is imprecise, useless or confused in the descriptions and definitions given, especially in light of the examples given? [Or else, you are sounding a whole lot like you are simply making closed-minded objections to try to rationalise a closed mind.]) Its basic meaning is as a matter of recognising something as commonplace as contextually responsive ASCII text in English taking up at least 143 characters. Since it is a functionally specific subset of complex specified information -- and since function is recognised -- the definition of CSI and its metrics also apply. But FSCI is less difficult to specify, as we simply need to recognise function. Also, Abel, Trevors, Chiu, Durston et al. have been working on Functional Sequence Complexity and have published a table of 35 values of FSC in Fits. You can inspect their method, which is based on standard approaches. All of this has been said before, and the discussions in the WACs and glossary above give links to the details as well. So, the problem does not seem to be any real lack of clarity or definition in any reasonable sense, but that you evidently do not wish to recognise the existence of such a ubiquitous phenomenon and its well-known source, as that would be at once fatal to your case. If you doubt me on how common FSCI is, look above at the posts in this thread: are they functional in any reasonable sense of the term? Are they informational? Are they complex in the sense of using beyond 1,000 bits of info storage? The answers are utterly obvious. So, JT, it looks rather like a case of selective hyperskepticism again -- a selective hyperskepticism that contradicts itself as soon as you post or read a post here and take it as more than mere lucky noise mimicking what intelligent designers are said to do. Reductio ad absurdum, yet again. GEM of TKI kairosfocus
P.P.S. Foresight is foresight whether it is determined to use its foresight in a certain way or not, and whether it is determined to exist or not. We experience and use our foresight every day; thus we know it exists whether we have free will or not, and whether the universe has a deterministic structure or not. Have you ever used a map and your foresight to plan a route to a future destination you wished to travel to? CJYman
JT, P.S. When referring to gravity as a "law," scientists are referring to the foundational principle which causes the regularity described by the mathematical equation for gravity. When a scientist says that the bonding of two atoms is caused by "law," he is referring to a bonding regularity imposed by the physical properties of the atoms. Equating "law" with "determinism" muddies the waters of the debate, since intelligence could be determined to exist by a previous intelligence; or intelligence may not equate to libertarian free will and thus be determined in its behavior -- but only if the very structure of the universe is also deterministic, since intelligent behavior utilizes input gathered from our interactions with our universe. So, since determinism is not the issue which has been debated for centuries (teleology is the issue), and since determinism isn't foundational and isn't necessary to ID Theory, criticisms of determinism or non-determinism [while providing interesting philosophical discussion] are not to be focused on when debating core ID Theory principles and methodology. CJYman
JT: "...or somehow feel things the way a human does in order to create complex artifacts." It has nothing to do with feelings. As I continually state and you continually ignore, it has everything to do with foresight (awareness of future goals which do not yet exist). This issue of teleology (simply, end products determining present configurations; future possibilities influencing the present) is the core issue of the debate which has been going on for centuries. It matters not that a central figure in modern ID has certain personal thoughts about determinism or free will. The rest of the ID community is not bound to agree with those views unless it is shown that they are foundational to ID, and I have shown that they are not necessary to ID Theory. That's the beauty of free thought ... being able to think for yourself, add your own two cents, and not be inhibited by what the "authorities" state. Equating law with determinism or non-determinism is not the issue. The scientific view of law as mathematical descriptions of regularity, or of organization caused by physical properties of matter, is the only necessary and IMO useful definition of law as it pertains to ID Theory. If you equate law with determinism, you reduce the debate to endless banter about the metaphysical definition or even possibility of determinism vs. non-determinism. Michael Polanyi (a distinguished chemist turned philosopher) discussed life as being founded upon non-physical-chemical principles -- that is, not founded upon law as scientists use the term. Again I ask: do you possess the capability to be aware of future goals that do not yet exist (foresight)? I can tell you that, at the very least, engineers sure do. Honestly answering that question is the first step in the right direction to understanding the fundamentals of ID Theory. Thus, until you answer this question, I can't take your criticisms of ID Theory seriously, since you have not yet understood the extreme basics. CJYman
JT: In one of the patronizing remarks directed at me earlier in the thread, Joseph provided me a reading list, but none of the sources were online.
Umm, I was asking a question to see if there was any common ground for a discussion. Now it appears that you haven't read any ID literature, which means you argue from ignorance. That is never a good thing. And that is why I have read about 100 books on biology, evolution, and evo-devo. These books were not online. I either had to buy them or get them from a library. Now if you come to a blog in order to learn something about a topic, then it is clear that you really are not interested in it. Joseph
CJYMan: "Intelligence equates to CSI therefore it is best explained by previous intelligence and not *merely* chance and law. That is basically what the papers on active information prove." In one of the patronizing remarks directed at me earlier in the thread, Joseph provided me a reading list, but none of the sources were online. Whatever papers you're referring to, if they're online, I'll go read them now. JT
CJYMan:
That is debatable. If conscious awareness of future targets which do not yet exist (true foresight) can be the result of a program then an intelligent agent can be a program. It seems that you are just completely missing the fact that awareness of future targets (which you possess but haven’t admitted, yet) exists and separates intelligent systems from non-intelligent systems — systems controlled by only randomness and law.
As I've indicated, you were implying something about foresight, awareness and consciousness which I didn't completely pick up on, and that may have made my posts 127-129, especially 127, not as responsive specifically to what you were implying. Nevertheless, I do make several points in 127-129 that should make it easier to understand my own personal viewpoint, so I hope you read them all anyway. And if you have anything else to say, I'll try to read it more carefully next time. But on the subject of consciousness, it doesn't seem to be relevant at all, no matter how important it seems to some people. I would say that a stomach and heart and various other biological systems don't seem to be conscious, although they do very complex and goal-oriented things. Consciousness really means nothing more than what it feels like to be us. But feelings are determined by chemicals. The fact is, I think I.D. advocates don't actually mention "consciousness" specifically all that often (for whatever reason), even though I now think that must in fact be what is most crucial to them: that something has to be conscious, or somehow feel things the way a human does, in order to create complex artifacts. JT
Or foresight could be, "If I do this, then such and such will happen." And that could be a conclusion based purely on experience. Baby humans spend a LONG time learning about the physical world strictly through trial and error. And to tie together a whole bunch of if-then propositions in your mind to reach a conclusion about what behavior to follow to reach some future goal is a computational activity. From my vantage point, it's ridiculous to mystify foresight in any way, or to allow such a notion to stand even by being noncommittal on it and saying, "It may be a deterministic process, it may not be, who knows? It doesn't matter." Sorry, that just doesn't cut it for me. JT
I do need to reiterate that your primary point did apparently elude me: this idea of Foresight, involving the metaphysical awareness of future states not currently existing. I think such an idea should be discarded. A cheetah can say, "Based on how that gazelle is running, he will be at point B in 2 seconds." That's the only type of foresight that exists. Whatever foresight humans have is the same thing, only conceivably to a greater degree: extrapolating further into the future based on an assessment of the dynamics of an unfolding event, or on an assessment of previous states of an event. This would be an imperfect ability for anyone without unlimited information. JT
Some of my responses were not responsive enough to the main point you were making - let me try to go back and rectify that piecemeal:
CJY: That is debatable. If conscious awareness of future targets which do not yet exist (true foresight) can be the result of a program then an intelligent agent can be a program.
It seems that you are just completely missing the fact that awareness of future targets (which you possess but haven’t admitted, yet) exists and separates intelligent systems from non-intelligent systems — systems controlled by only randomness and law.
I think this is what Allen MacNeill was alluding to, possibly: this idea in I.D. (apparently) that humans have some metaphysical awareness of something that doesn't exist yet. I don't think humans operate that way. We're able to take something that already exists and modify it in some way. JT
CJYMan [126]:
JT: “It would seem that there would be no way to distinguish any two entities said to be intelligent agents.”
Personality, talents, abilities, etc. all affect how one uses his foresight. The intelligent agent we call Beethoven used his foresight differently than how Einstein used his. There is definitely a distinguishable difference in intelligent agents.
Yes, but personality, talents and abilities are all due to objective deterministic causes. If someone has musical abilities, presumably they do not emanate mysteriously, nonphysically and nonmechanically from some metaphysical dimension. Furthermore, whether or not someone has the ability to cultivate these "innate" gifts is dependent on objective conditions in their environment, e.g. how much money their parents make, the society they live in, and so on. All of these things can be tied to objective, identifiable physical things.
JT: "Also with intelligent agency, if it were possible to look at some arbitrary string and say, "OK, this was definitely output by Intelligent Agent X" or "This was definitely not output by Intelligent Agent X", it would imply you had a program characterizing Agent X, which would mean it was not an intelligent agent."
Huh? Where are you getting this mumbo jumbo from? How does me recognizing the work of Beethoven mean that Beethoven did not use his foresight to create musical masterpieces?
My argument was a little unclear here, admittedly. I'm saying that if something cannot be described via a program, it cannot be described at all; and how do you distinguish two things that cannot be described? There is hardly anything in nature or human society or human behavior that someone has not attempted to simulate on a computer. Do you understand that the historical position of I.D. has been that Intelligent Agency does not operate according to law, meaning that it is not deterministic (meaning that it cannot be simulated on a computer)? If I.D. is moving away from that now, great. But being noncommittal, or saying "it really doesn't make any difference", doesn't cut it. Really, science treats everything as deterministic. I know all about QM, but my statement is essentially correct.
JT: “If a human being is characterizable as a binary string (which seems reasonable given DNA) then we know there exists some mechanism (i.e. program) that will output it.”
For the sake of argument, sure. Yet that mechanism will include foresight — whether foresight itself is a “mechanism” or not we know that it exists and is used in the generation of certain patterns. We have repeatedly shown you this and it is now up to you to provide counter examples.
No, I don't think such a mechanism would necessarily have "foresight", at least not with all the transcendental baggage I think you want to associate with the term. Does the mechanism of epigenesis have foresight? What about a mechanism that takes some highly compressed and encrypted binary file and produces a beautiful painting from it? Does such a program have foresight? All I'm saying is that you could have some diffuse set of factors in the universe that resulted in life. I admit that the connotation of this is noteworthy from an I.D. standpoint: whatever the mechanism and initial physical factors involved, it would be like saying an encrypted, compressed version of the Mona Lisa existed out there in the universe (if you catch my meaning, and you should). So this is where I would join with I.D., in remarking on the obvious fact that if we exist, it means that something equating to us predated us (just as a compressed, encrypted version of the Mona Lisa + the decryption program equates to the Mona Lisa). Which would be easier to get by chance: the Mona Lisa, or a compressed, encrypted version of the Mona Lisa + the decryption algorithm together? Do you catch my meaning now? Where I part company with I.D. is in allowing to stand, even by being noncommittal (as you are being), some notion of transcendental foresight being necessary to create complex things.
I am a methodological naturalist. I have never ruled out a physical process as having caused life. For me to be in accordance with ID Theory all I need to realize is that the mechanism included foresight either as part of the mechanism [if foresight is indeed mechanistic] or outside of and influencing the mechanism.
Same here, so what would be your primary objection to naturalistic explanations? The only place the supernatural would conceivably have to come in is at the very beginning, but you could take that out as well by assuming an infinite regress of active information (as you alluded to, if I understand what is meant by that; I note you equate it to what I was saying). As far as naturalistic explanations go, the only valid objection is if someone is appealing to randomness primarily as an explanation. But you could have some set of factors out there in the universe that accounted for life without saying those factors came into existence for no reason at all; most of those identifiable factors could in essence be eternal, i.e. not coming into existence at a point in time by chance. Origin theories are very much moving away from randomness as an explanation. Maybe you don't understand what I've been saying here either.
Now, for further clarity, how are you defining the term “mechanism” as you are using it?
Basically, a mechanism is everything: everything whose behavior can potentially be systematically described and predicted, everything that functions in a potentially describable way. All these things operate according to law.
But seriously now, I actually agree with you. Intelligence equates to CSI therefore it is best explained by previous intelligence and not *merely* chance and law. That is basically what the papers on active information prove. I’m glad to see you are in agreement.
So there's this added extra ingredient that can't be described by law. Then it can't be described at all, so what can science do with such a notion? A better alternative would be eternal laws at some point (or "infinite regress of information", etc.)
By “mechanism” do you mean “*only* law and chance”; or do you mean “there was cause and effect.”
Same difference: "Law", "Program", "Cause", "Determinant" -- something specific that is characterizable, describable; something that can potentially be written down.
JT: “So all you have left for an “explanation” is either randomness, or if we are to accept I.D., also possibly “Intelligent agency”. What difference does it make which one of those two you pick?”
One has foresight and the other does not. Is it really taking you this long with such convoluted arguments to figure it out?
But to me, "foresight" that does not operate via a mechanism is something we cannot describe. If we're talking about conditions at the beginning of the universe, here your foresighted "Intelligent Agent" does not have any mechanism, any program, to associate with it. It's just something that magically output something amazing. Why not assume the initial set of objective causal factors were eternal (as opposed to being magically materialized by an "Agent")? I've been writing for a long time today, so I'm just ending it abruptly here. Sorry if I did not address something. [I wrote: "No, I don't think such a mechanism would necessarily have "foresight", at least not with all the transcendental baggage I think you want to associate with the term." Maybe that was a mischaracterization of your position, I don't know.] JT
CJYMan [125]:
2. The issue of determined vs. non-determined is not essential to the fundamental precepts of ID as a Theory as I have briefly shown above. ... 2. Determinism is not an issue for the foundation of ID Theory as shown above.
It's a tactic, obviously, on both sides to very gradually move away from an argument they're losing. That's not a bad thing really, until they eventually start denying they made the argument to begin with. So Allen_MacNeill will say that no modern evo-theorists assume that RM-NS is sufficient, while ignoring the fact that it's still presumably presented by itself in elementary textbooks. In case you were not aware, "Intelligent Agency" has been historically presented in I.D. as something distinct from either chance or law, and law has been presented as a synonym for determinism. The concept of "choice" in I.D. as something unique to "Intelligent Agents" has also been very prominent. Dembski says repeatedly that mechanism or necessity or law doesn't create contingency because its outcome is predetermined. Do you need me to hunt down some quotes for you (or do you concede the point)?
JT: “There are certain classes of strings which would be extremely improbable to occur by pure randomness, for example strings that are algorithmically compressible. Actually this is the only class of strings that I personally understand why their probability is small. If someone has demonstrated that FSCI for example is highly unlikely to occur by randomness, I am not aware of it, (not necessarily denying it though).”
…ummmmmmmm this is what KF and others, including myself, have been constantly trying to explain to you. Tell me, what is the probability of the random generation of Shakespeare's Hamlet? Now, how many quantum calculations have gone on during the universe's 15-billion-year history? Compare the two numbers and, even from a purely mathematical POV, without further trying to gyrate and twist randomness into magically having the ability to bestow meaning, the probabilistic resources are horribly lacking. Now, just do some small-scale experimenting on your own. Either take KF's advice and start rolling some dice at a casino, or have a computer randomly mutate a string of letters and compare the number of bit flips (probabilistic resources) with the bit length of a sentence when and if it materializes. Are you still not understanding that FSCI is highly unlikely to generate itself by chance?
I'm not talking about commonsensical arguments from incredulity, like how could monkeys typing randomly produce the works of Shakespeare. Do you understand that that is not a formal argument? An argument to the effect that one particular string is extremely rare and so could not occur by chance is irrelevant; Dembski talks about this principle at great length. The odds of getting The Merchant of Venice by chance are the same as the odds of getting any other string of comparable length by chance. You have to find a property that is shared by an infinite number of strings and talk about the probability of getting a string with that property. Dembski says that compressible strings are an extremely small percentage of all strings, and compressibility is formally defined (and possessed by an infinite number of strings). So that's why it's meaningful to talk about the probability of getting a compressible string. FSCI has not been formally defined, and no proof has been given as to what percentage of strings exhibit FSCI.
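To make the compressibility claim concrete, here is the standard counting argument (a sketch of my own, not a quotation from Dembski): there are 2^n binary strings of length n, but there are at most 2^0 + 2^1 + ... + 2^(n-k) = 2^(n-k+1) - 1 descriptions (programs) of length n-k bits or fewer, and each description decompresses to at most one string. So the fraction of n-bit strings compressible by k or more bits is less than 2^(n-k+1) / 2^n = 2^-(k-1). For k = 21, that is less than one string in a million, no matter how large n gets.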
1. As long as the output is regular, it can be described as resulting from law (no matter the complexity of said law).
No, something could be quite irregular and complex and still be the output of some complex set of laws. Just to be clear, a program can write programs. A series of directions a program gives you is itself a program; that's a trivial example. A compiler is an extremely complex program that takes a human-language artifact, optimizes it, and converts it into something entirely different. Programs write programs all the time.
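To make that concrete, a trivial runnable sketch (Python; the generated function is invented for illustration):

    # One program writing, loading, and running another program:
    source = "def double(x):\n    return 2 * x\n"
    namespace = {}
    exec(source, namespace)           # compile and load the generated code
    print(namespace["double"](21))    # prints 42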
“JT: Neither a process that is random nor a process that is an Intelligent Agent has any sort of description at all.”
1. Yes and no. Yes, randomness is not so much a process as it is a lack of causal description. But, no, randomness does have a statistical mathematical description. 2. No and no. A process that is an intelligent agent is described as a process which is aware of and can generate future targets which do not yet exist and then harness chance and law to engineer a solution to accomplish that goal. This process can be mathematically described as CSI or active info.
That someone can write a fairy tale doesn't mean the thing they're describing is real. (My intention is a substantive point here, not an insult, btw.) The historical position of I.D. is that an intelligent agent is something distinct from law or program or deterministic cause. That's what I was addressing. If a program doesn't exist to characterize something, then you're talking about something that's make-believe. You can describe a perpetual motion machine too.
JT: "You could never have a course, "Introduction to Intelligent Agents", that purported to describe how intelligent agents function."
I disagree. One can already describe how AI functions. I see no reason why the time will not come in the future when we will be able to describe how conscious intelligence functions. Penrose and Hameroff may even have gotten us moving in the right direction. The strength of a conscious experience may indeed be described as E=h/t.
No, I'm talking about an Intelligent Agent as it has been historically described in I.D. Are you assuming that I think conscious experience could not be expressed in terms of physical laws? How often do you read my posts? Once again, I'm using "Intelligent Agent" in the sense that I.D. has historically used that term.
You don't have foresight?????? Please don't ever go into engineering, for the sake of the safety of humanity.
Try to understand: I think all programs have foresight, to one degree or another. I think a human being has to be treated as a complex chemical program. I do not believe in I.D.'s magical "Intelligent Agent" that operates via some method that can't actually be characterized or described via a program and that magically produces foresight and goals. You're agnostic or noncommittal on whether an intelligent agent can be a program, but that says nothing about the historical position of I.D. If I.D. is gradually transitioning away from that position, now that they see the implications, that's great. But people like KF, for example (and please correct me if I'm wrong), still hold to the historical view of I.D. regarding "Intelligent Agency". JT
CJYMan [124] [responses to 125 and 126 forthcoming.] [CJYMan: All I want to do is communicate, and if any of the following becomes derisive at some point, it's just in the heat of the moment. I just don't have time to go back and re-edit everything. My views do actually converge with I.D. at certain points; I just don't think they're stating it the correct way. But actually, some of them (not including you, evidently) would be violently opposed to the idea of agents operating according to a program, or to any potential threat to the notion of free will.] [But anyway, the end of my post in 120 evidently became a little confusing, and I will try to clear that up subsequently. Stick with my ideas as long as you care to. If I'm not successful in getting my points across, I have only myself to blame.]
JT: “I was always under the impression that anything that’s a program cannot be intelligent agency in the I.D. conception.”
That is debatable. If conscious awareness of future targets which do not yet exist (true foresight) can be the result of a program then an intelligent agent can be a program.
Well, it provides I.D. with a moving target if proponents don't even agree among themselves on this point. As I indicate later, if an intelligent agent can be a program, then an intelligent agent can be deterministic. Frankly, the only type of agent I would personally be interested in is one that could actually be described (i.e. one for which a program potentially exists to characterize its behavior).
It seems that you are just completely missing the fact that awareness of future targets (which you possess but haven’t admitted, yet) exists and separates intelligent systems from non-intelligent systems — systems controlled by only randomness and law.
If law is a synonym for determinism, then any program is law. And if an intelligent agent can be a program (which you've agreed to), then an intelligent agent operates according to law. [It's difficult for me to discuss this, because you imply it's possible some agents might be programs, while the traditional viewpoint in I.D. has been that agents are not programs.] The types of laws we tend to associate with nature are generally expressed compactly. At least one reason for this is that it is always a goal in science to express things as efficiently as possible, to strip out all extraneous verbiage, to distill, i.e. to express the essence of a thing. But it also has to do with narrowing the focus of attention, so you can talk about something specifically. Furthermore, the reason that laws of nature have traditionally been expressed as mathematical formulas, as opposed to algorithmically, is that computer science has only been around for 75 years or so, and prior to that all they had was math. That's a rough generalization, but essentially correct. Of course they did have logic in Ancient Greece, granted. But the point is, there is a certain limitation of expression when dealing strictly with mathematical formulas, which would be another reason why laws of nature appear simplistic. Mathematical formulas are only a small subset of "Law". "Law" would be any characterization of how something operates. On what basis would we assume that nature can only be described by simple laws? It's like saying that nature is only deterministic in really simple things, that to whatever extent it is "complex" it is also unpredictable. That would be a philosophical assumption, and a very odd one at that, from my vantage point. If nature is simple, why does it take an incredibly large program to simulate the weather, for example? To operate according to law means to have a description, and vice versa.
Well, technically, even statistical randomness can be deterministic if one is able to calculate all initial conditions to infinite precision. That's where the problem lies, though. This issue of determinism vs. non-determinism has been hotly debated as a metaphysical issue for quite some time now and is actually not essential to the fundamentals of this debate. So out with the herrings …
I don't know where the red herrings are. Chance is something assumed in I.D. literature. So, in your mind, the only options might be law and intelligent agency.
JT: “I was always under the impression that anything that’s a program cannot be intelligent agency in the I.D. conception.”
That is debatable. If conscious awareness of future targets which do not yet exist (true foresight) can be the result of a program then an intelligent agent can be a program.
JT: “So I’ll assume that for the moment.”
Actually, that assumption is not necessary, since even if foresight can be the result of a program, it has been shown above that the program would also need previous foresight in its full causal chain (or else exist eternally). Thus foresight would breed further foresight, CSI, etc. etc. etc.; and *only* law and chance would again not be the best explanation.
What makes you think a program cannot have foresight? (Oh wait, you're undecided on this point.) And once again, any program is deterministic, not just "law" or simple laws. A simple program can have simple goals; a complex program can have complex goals and a complex saved internal state, and be governed by complex processes. How much memory a process has is one determinant of what sort of foresight it can have: if it can save a lot of data from the external world, it can recognize complex scenarios in the external world if they arise again. Actually, I've seen Allen_MacNeill characterize I.D.'s conception of foresight as something quite bizarre, operating outside of space-time or something, and I suggested that no one in I.D. thought that; in reply, I believe he posted a list of sites from Google, and I never checked into it. But when I think of foresight, I think of a program having a goal of some sort, and possibly being able to simulate the external world in memory such that it can predict [perhaps imperfectly] future states of the world and then navigate towards those that are consistent with some goal. But all this can be realized by a program, i.e. some deterministic process, i.e. something that operates according to complex laws. In fact, I wouldn't know what approach to take in analyzing the concept of foresight other than a computational approach. The program is the formalism for characterizing how some process operates; the program supersedes all other forms of description, be it math or a natural language or whatever. I believe a naturalist would say that nature doesn't need a certain amount of memory in order to simulate itself. And I know that evo-theorists would be the ones who say evolution has no goals or foresight, but I would say that's not actually possible -- not because the creation of humans requires something outside nature or law to explain it, but rather because there will be foresight inherent in any deterministic process that causes something to happen: if f(x) results in y, then f(x) is just y in another form. When you talk about a human being having a goal, that goal will be expressed in his brain as a certain complex configuration of chemicals. A goal in a computer would be a series of electrical impulses. But these internal artifacts map to something in the external world. In a deterministic universe, if everything is predetermined, then humans have always existed for eternity as a concept. Determinism is I.D.'s friend. [The above discussion is relevant as well to your closing remarks in 124 about foresight, so I won't address those specifically.] JT
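A minimal sketch of foresight in that computational sense: a toy Python agent that simulates future states of a deterministic world model in memory and executes the action sequence whose predicted end state is closest to its goal. The one-dimensional world and all names and parameters here are illustrative assumptions, not anyone's actual model:

    from itertools import product

    GOAL = 10

    def step(state, action):
        """Deterministic world model: law-like, fully describable."""
        return state + action

    def plan(state, actions=(-1, 0, 1), depth=3):
        """Lookahead as pure computation: simulate every action sequence
        in memory and return the one whose predicted end state lands
        closest to the goal. Nothing here is conscious."""
        def predicted_end(seq):
            s = state
            for a in seq:
                s = step(s, a)        # imagined moves, not actual ones
            return s
        return min(product(actions, repeat=depth),
                   key=lambda seq: abs(GOAL - predicted_end(seq)))

    state = 0
    while state != GOAL:
        for action in plan(state):    # execute the foreseen sequence
            state = step(state, action)
    print("goal reached:", state)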
To JT (part 3) JT: "It would seem that there would be no way to distinguish any two entities said to be intelligent agents." Personality, talents, abilities, etc. all affect how one uses his foresight. The intelligent agent we call Beethoven used his foresight differently than how Einstein used his. There is definitely a distinguishable difference in intelligent agents. However, even if that were not the case, this in no way sets back the fundamental position of ID Theory: 1. Intelligence exists. 2. Intelligence is necessary to produce certain effects. 3. Law and chance are not adequate explanations for those effects. JT: "Also with intelligent agency, if it were possible to look at some arbitrary string and say, "OK, this was definitely output by Intelligent Agent X" or "This was definitely not output by Intelligent Agent X", it would imply you had a program characterizing Agent X, which would mean it was not an intelligent agent." Huh? Where are you getting this mumbo jumbo from? How does me recognizing the work of Beethoven mean that Beethoven did not use his foresight to create musical masterpieces? Is chance or law a better explanation? If so, please provide evidence that background noise and an arbitrary collection of laws will produce that type of music. JT: "So there's no way to distinguish various intelligent agents from each other." I think you are confusing yourself as much as you are confusing me. First, it seems that you are arguing that if there is no way to distinguish intelligent agents, then intelligence is no different than randomness, since there is also no way to distinguish between different samples of randomness. Again a fallacious argument, similar to your previous fallacious argument: Intelligence and randomness are subsets within the larger set of "not being able to distinguish between sub-sub-sets", therefore the subsets are equal. That most definitely is not necessarily true. Draw a Venn diagram and you'll see why. Then you seem to argue that if you can distinguish between intelligent agents, they therefore reduce to programs. 1. It doesn't matter. 2. HUH??? Mind running that logic by me again? JT: "I.D. will make comments to the effect, "We do know that intelligent agents routinely output [fill in the blank] FSCI, CSI, symbolic programs, etc. So if we see such things and don't know of a mechanism (i.e. program) that caused it, we are justified in assuming it was output by intelligent agency."" Mechanism is a secondary question, as shown above with the gravity and Big Bang questions. Here, I'll simplify things for you a bit so that you don't have to provide an Olympics' worth of mental gymnastics and gyrations. 1. Does it have foresight? 2. Does it use its foresight to produce certain effects? 3. Are those effects plausibly explainable in terms of *only* law and chance absent previous foresight? 4. Can foresight be caused by *only* law and chance? JT: "If a human being is characterizable as a binary string (which seems reasonable given DNA) then we know there exists some mechanism (i.e. program) that will output it." For the sake of argument, sure. Yet that mechanism will include foresight -- whether foresight itself is a "mechanism" or not, we know that it exists and is used in the generation of certain patterns. We have repeatedly shown you this, and it is now up to you to provide counter-examples. JT: "So it's not clear on what basis you could rule out some physical process in the universe that could have caused life." I am a methodological naturalist.
I have never ruled out a physical process as having caused life. For me to be in accordance with ID Theory, all I need to realize is that the mechanism included foresight, either as part of the mechanism [if foresight is indeed mechanistic] or outside of and influencing the mechanism. Now, for further clarity, how are you defining the term "mechanism" as you are using it? JT: "If you have some program-input f(x) that outputs y then f(x) equates to y. So any mechanism-input that is proposed as an explanation for y equates to y. So if you have some disparate, diffuse set of numerous factors existing out there in the universe that collectively resulted in life, that set of factors would still equate to life. As far as I know the significance of this eludes everyone in this forum but me." It must be because you are just sooooo intelligent ... er ... or not?!?!?! ... or maybe that statement equates to randomness, or maybe law ... was it determined ... or maybe your thoughts which created that statement are only tricking us into thinking it came from JT when really it came from a different program which equates to randomness ... oh really?!?!? But seriously now, I actually agree with you. Intelligence equates to CSI, therefore it is best explained by previous intelligence and not *merely* chance and law. That is basically what the papers on active information prove. I'm glad to see you are in agreement. JT: "But to continue, say that some binary string exists and that somehow it can be ruled out that it was the output of a mechanism. (Consider for example conditions at the beginning of the physical universe.)" By "mechanism" do you mean "*only* law and chance", or do you mean "there was cause and effect"? JT: "So all you have left for an "explanation" is either randomness, or if we are to accept I.D., also possibly "Intelligent agency". What difference does it make which one of those two you pick?" One has foresight and the other does not. Is it really taking you this long, with such convoluted arguments, to figure it out? JT: "Actually there is a third alternative: the binary string in question could have always existed (and thus need not have been "caused" by randomness or "Intelligent Agency".)" Ah, yes ... the infinite regress of active information. But of course, you do realize that this would mean that there is an eternal bias in the very nature of reality for the production of life, evolution, intelligence, and all the patterns which are observed to be hallmarks of intelligent (foresighted) agents and are not properly/fully explained by either chance or law. There are some pretty interesting implications for that line of thought, but yes, it is, alongside ID Theory, the only other really valid scientific option. One more thing, though. It seems as if scientists would rather provide explanations for patterns through observed cause-and-effect relationships than just say "We have no idea what causes it ... that's just the way it is." If the foundation is intelligence, at least this provides a closed intelligence-information-intelligence loop. If the foundation is eternal active info, we have no real explanation for our universe, life, evolution, or intelligence. It's neither law nor chance nor intelligence.
If you want to take that as your idea and run with it, go ahead; just don't fool yourself into thinking that you've somehow overturned ID Theory or shown it to be incoherent or unscientific -- especially when you do not see your own arguments as the result of intelligence (foresight, logical planning, goal-oriented structuring, etc.). To everyone else reading this ... if anyone is following anymore, I apologize for taking up so much room. It's just that I think there may be hope that JT will finally understand the basics of ID Theory. CJYman
To JT (part 2) JT: "So, if intelligent agency is one type of nondeterminism then it cannot be described via a program. This is in contrast to what CJYMan has said I believe, that AI programs can be examples of intelligence agency. (KF, I'm not recalling at the moment where you stood on this.)" 1. The jury is still out on the issue of determinism vs. non-determinism. Yet I will point out that quantum mechanics seems to have brought into the discussion the ability for there to exist a truly non-deterministic foundation to our universe. 2. The issue of determined vs. non-determined is not essential to the fundamental precepts of ID as a Theory, as I have briefly shown above. 3. My thoughts on foresight, AI, and programs have been covered in my last three comments and in what KF has quoted from me above. JT: "There is no binary string that cannot be output by a program. In fact for any given binary string, there are an infinite number of programs that will output it." Fair enough. JT: "There is no binary string that cannot be the result of pure randomness either." Technically ... you're right. Practically ... you're wrong. The problem is that, sure, randomness can be summoned to explain away anything ... even law-like behavior. The question is "what is the best explanation?" In fact, if we took your premise here and ran with it, then science, as the discovery of laws of nature, would not exist, as there would be no concept of law. It could just all be explained by chaotic randomness. "Planets orbiting the sun?" ... easily computable as a random string; nothing to see here. Randomness did it. No true correlation to a fundamental principle at the foundation of our universe. "Chemicals bonding regularly?" ... easily computable as a random string; nothing to see here. Randomness did it. JT: "There are certain classes of strings which would be extremely improbable to occur by pure randomness, for example strings that are algorithmically compressible. Actually this is the only class of strings that I personally understand why their probability is small. If someone has demonstrated that FSCI for example is highly unlikely to occur by randomness, I am not aware of it, (not necessarily denying it though)." ...ummmmmmmm this is what KF and others, including myself, have been constantly trying to explain to you. Tell me, what is the probability of the random generation of Shakespeare's Hamlet? Now, how many quantum calculations have gone on during the universe's 15-billion-year history? Compare the two numbers and, even from a purely mathematical POV, without further trying to gyrate and twist randomness into magically having the ability to bestow meaning, the probabilistic resources are horribly lacking. Now, just do some small-scale experimenting on your own. Either take KF's advice and start rolling some dice at a casino, or have a computer randomly mutate a string of letters and compare the number of bit flips (probabilistic resources) with the bit length of a sentence when and if it materializes. Are you still not understanding that FSCI is highly unlikely to generate itself by chance? JT: "Note that if you're talking about compressibility, that would definitely include strings exhibiting the sort of pattern-simplicity that I.D. seems to equate to "Law"." Yes, that is what law is -- a mathematical description of regularity. JT: "It would seem to me to be completely arbitrary to say that in order to be considered "law", programs cannot exceed a certain degree of complexity (i.e. length).
It seems that if someone says that laws only refer to very simple programs, and that it's possible for a very complex program to be an Intelligent Agent, then that means it's possible for an Intelligent Agent to be deterministic. But I am going to say that any sort of program can be characterized as "Law", not just really simple programs." 1. As long as the output is regular, it can be described as resulting from law (no matter the complexity of said law). 2. Determinism is not an issue for the foundation of ID Theory, as shown above. 3. A program will definitely have a "Law" component to it, in that it will operate according to set laws once it is fashioned correctly. However, if the core of the program is a set of specified instructions beyond all probabilistic resources that outputs function as opposed to mere regularity, and if the organization of those instructional states is neither defined by law (regularity) nor caused by the physical properties of those states, then the core of the program is neither caused by nor defined by law. Thus, the program has a non-lawful component to it. It is not "merely" law. 4. So yes, every program can be characterized by law, but not every program can be characterized by *only* law. JT: "Neither a process that is random nor a process that is an Intelligent Agent has any sort of description at all." 1. Yes and no. Yes, randomness is not so much a process as it is a lack of causal description. But, no, randomness does have a statistical mathematical description. 2. No and no. A process that is an intelligent agent is described as a process which is aware of and can generate future targets which do not yet exist, and then harness chance and law to engineer a solution to accomplish that goal. This process can be mathematically described as CSI or active info. JT: "You could never have a course, "Introduction to Intelligent Agents", that purported to describe how intelligent agents function." I disagree. One can already describe how AI functions. I see no reason why the time will not come in the future when we will be able to describe how conscious intelligence functions. Penrose and Hameroff may even have gotten us moving in the right direction. The strength of a conscious experience may indeed be described as E=h/t. However, that also is debatable, yet is not necessary to the fundamental position of ID Theory. Just as we can detect the effects of gravity without knowing what causes gravity (Newton before Einstein), and detect the effects of and infer back to a Big Bang, so we can also experience the existence of foresight, observe its effects, and infer from its effects to its previous existence without yet knowing how it functions. The only other necessary part of the hypothesis would be that law and chance *absent* previous foresight will not generate foresight. Any takers on counter-examples? JT: "And you can't characterize an Intelligent Agent on the basis of its output, because that is what a program is - a concise characterization of the output of a process." 1. No one is characterizing intelligence on the basis of its output. Intelligence is characterized by its foresight. It is *detected* on the basis of its output. Or are you trying to say something different? 2. What you just stated makes no sense. I see no logical flow from one concept to the next, nor do I see a logical flow from this statement to the rest of your argument. JT: "... but I don't think a human being for example is an intelligent agent. (OK Tim, KF, CJYMan, et al.
time to go into paroxysms over that last remark.)" You don't have foresight?????? Please don't ever go into engineering, for the sake of the safety of humanity. CJYman
Hello JT, you've left me much to respond to, so I will break my comments down into sections. JT: "A lot of you seemed really irritated by my comment that I.D's conception of intelligence is indistinguishable from randomness." Well, I for one get a little irritated when one tries *continuously* to pass off assertion and logically fallacious arguments as some type of "criticism." I mean, I can understand making a mistake in a logical argument ... but seriously ... to continue to push the fallacy after it has been exposed countless times ... sorry man; I'm sure that would even irritate you. It seems that you are just completely missing the fact that awareness of future targets (which you possess but haven't admitted, yet) exists and separates intelligent systems from non-intelligent systems -- systems controlled by only randomness and law. JT: "I was always under the impression that anything that's a program cannot be intelligent agency in the I.D. conception." That is debatable. If conscious awareness of future targets which do not yet exist (true foresight) can be the result of a program, then an intelligent agent can be a program. JT: "So I'll assume that for the moment." Actually, that assumption is not necessary, since even if foresight can be the result of a program, it has been shown above that the program would also need previous foresight in its full causal chain (or else exist eternally). Thus foresight would breed further foresight, CSI, etc. etc. etc.; and *only* law and chance would again not be the best explanation. JT: "I believe Atom said that nondeterminism could be either randomness or intelligent agency." Well, technically, even statistical randomness can be deterministic if one is able to calculate all initial conditions to infinite precision. That's where the problem lies, though. This issue of determinism vs. non-determinism has been hotly debated as a metaphysical issue for quite some time now and is actually not essential to the fundamentals of this debate. So out with the herrings ... Foresight could be determined to come into existence, libertarian free will may be non-existent, and our universe from beginning to end may be wholly determined, yet ID Theory would still stand as long as: 1. Foresight exists. 2. Foresight is necessary in the creation of certain patterns. 3. Merely law and chance *absent foresight* will not best explain, nor practically generate, said patterns. 4. Foresight itself is not caused by *only* law and chance absent previous foresight. CJYman
P.S. Artificial foresight (the mimicking of results provided by true foresight) always derives from, but is not equal to, true foresight. There is absolutely no awareness of future goals involved. Thus, the results of AI are not due to the AI but due to the foresight which programmed it; the results are thus indirectly the output of true foresight. CJYman
I have to go to work today, but I will be back to continue to repeat the points I've made, which ROb and JT continually and blatantly ignore. Furthermore, when I return I will show that KF is correct: I agree with him on the substance of the foresight issue that matters in this debate. For now, it will suffice to state that KF in #121 has adequately represented my views on the matter of foresight. CJYman
Re Rob, 118: On points: 1] Halting prob vs knowing when to call it quits. Are you trying to tell me that people don't know -- providing they have common sense -- when to call it quits? (Or do you expect to be able to show an fMRI in which you see a little flowchart appear in the brain image with a solution to the halting problem for algorithms?) Here's a heuristic for you: if you are deep in a hole and need to get out, stop digging deeper. (In this case, into a reductio . . . ) Notice, onlookers: this functions semantically and metaphorically -- ways that algorithms simply do not; but we do, precisely because we have real intelligence and foresight and insight and imagination. All of which are self-evidently plain to the point where the rejection lands the objector in repeated absurdities. 2] since ID theorists disagree on whether computers can have foresight Last I checked, CJY boils down to saying that the smarts in a computer are put there by the programmer, i.e. they are not native to the AI but to the programmer. That is, he is not substantially different from me or Tim. To check, I keyed "foresight" into my find feature, and this at 77 is a good excerpt on CJY's basic view:
Foresight is self-evident, since we [i.e. known, conscious, self-aware intelligent agents] all experience it every day. We use our foresight to imagine a future goal that does not yet exist and then work to produce that goal. In many cases, these goals are neither best definable by law nor by randomness.
In 86, he goes on to make a remark that is probably being taken out of context:
Second, AI systems are called *artificial intelligence* for a reason. They model future possibilities and work toward a future target which does not yet exist, as in the chess program example. Thus, they have the most rudimentary form of foresight without being conscious of their foresight. They have artificial, as opposed to conscious or "real", foresight. Finally, AI systems are more than just law and chance. As already explained, and completely ignored by yourself, AI fundamentally consists of programming . . . KF unfortunately had to remind you [JT] of the very simple fact that the programming necessary for an AI system comes from a programmer using his foresight (one aspect of intelligence).
So, CJY EXPLICITLY agrees with me that there is no inherent foresight in AI systems, just what is written into a program per its algorithms. In short, there is no gotcha there, apart from taking words out of context and twisting them into what they plainly do not mean, the better to project to unwary onlookers the idea of a disagreement on substance between ID proponents. (50c gets you $5 that we will hear of this utterly irreconcilable disagreement about how ID is blah blah blah again. Just like with so many other artfully constructed strawmen based on quote mining and used against ID. Sorry if I sound disgusted at such distortions and the way they have been used to mislead and manipulate, but I am. For good reason.) Rob, you are beginning to sound here like the squid that squirts ink to cover its escape. 3] Disagreement resolved. Thank goodness. Not so fast. Computers execute under the constraints of physics, but their structure, function and information content reflect directed contingency. A point that is decisive on the difference between chance and contingency, and the onward distinction between undirected and directed contingency. And at no point have I or anyone else in this thread supportive of ID said any differently. Recall, most of us work with PC hardware and/or software, so we know. 4] your verbose ad nauseam repetition of points I've already addressed. Please excuse me if I take a break from you for awhile. Translated: running away behind a cloud of ink, laced with ad hominems, onlookers; while pretending to have cogently answered the issues on the merits. In fact -- just scroll up to check -- Rob has not COGENTLY addressed the issues he has been faced with, not only from the undersigned but from several others. Also, if short remarks are made, they are wrenched and abused rhetorically; if longer, more methodical ones are made, they are ducked and the author is attacked. That does not sound like the approach of someone who knows he has a serious case on the merits. Cho man, do betta dan dat! Shaking the head sadly, GEM of TKI kairosfocus
Rob, appreciative of your comments as always (not in defense of me, just in general). Collin [26]: if you really want to learn something, read Rob's posts, not mine. ------------- A lot of you seemed really irritated by my comment that I.D.'s conception of intelligence is indistinguishable from randomness. I have gone through the posts of the last couple of days, and the following is what I want to remark. (And if you're looking for absolute clarity and precision in terminology, go to Rob's posts, not mine.) Let's talk about binary strings. A binary string could be generated by pure randomness, as could be modelled by flipping a coin multiple times. A binary string could also be generated by some program. I'm thinking of a program that takes some arbitrary binary string as input, halts at some point, and produces a binary string as output. Note that the program itself is also a binary string. Let's also consider the program and a specific input to it together as a single entity. (And for the computer it's running on, we're imagining some very simplistic TM-equivalent device.) And then according to ID, there is a third type of causality called "intelligent agency" that can output binary strings. (Although, actually, I'm not sure I would consider randomness a cause as such. It's certainly not an explanation.) I was always under the impression that anything that's a program cannot be intelligent agency in the I.D. conception. So I'll assume that for the moment. I believe Atom said that nondeterminism could be either randomness or intelligent agency. So, if intelligent agency is one type of nondeterminism, then it cannot be described via a program. This is in contrast to what CJYMan has said, I believe: that AI programs can be examples of intelligent agency. (KF, I'm not recalling at the moment where you stood on this.) Now for clarity, let's review: There is no binary string that cannot be output by a program. In fact, for any given binary string, there are an infinite number of programs that will output it. There is no binary string that cannot be the result of pure randomness either. There are certain classes of strings which would be extremely improbable to occur by pure randomness, for example strings that are algorithmically compressible. Actually, this is the only class of strings where I personally understand why their probability is small. If someone has demonstrated that FSCI, for example, is highly unlikely to occur by randomness, I am not aware of it (not necessarily denying it, though). Note that if you're talking about compressibility, that would definitely include strings exhibiting the sort of pattern-simplicity that I.D. seems to equate to "Law". And then according to I.D., there is no binary string that cannot be output by intelligence. Let me talk about law for a moment. I.D. wants to associate "law" exclusively with processes characterized by algorithmic simplicity. So in terms of programs, I believe that I.D. would say that laws are to be associated with very small programs. All (TM) programs are deterministic. I would say that "necessity" refers to processes characterizable by programs. It would seem to me to be completely arbitrary to say that in order to be considered "law", programs cannot exceed a certain degree of complexity (i.e. length). It seems that if someone says that laws only refer to very simple programs, and that it's possible for a very complex program to be an Intelligent Agent, then that means it's possible for an Intelligent Agent to be deterministic.
But I am going to say that any sort of program can be characterized as "Law", not just really simple programs. Note that every program has a description, namely the program itself. You can allude to some program, because that program itself is a binary string. (And once again, I think it's helpful to consider the input to the program as part of the program.) If a process is deterministic, you could say, "Here is a description for that process", and point to a binary string which is a program characterizing that process. Neither a process that is random nor a process that is an Intelligent Agent has any sort of description at all. A random process is not characterizable by a program, and thus does not have a description. Same with Intelligent Agency. This means that you cannot have an English language description of something purported to be an intelligent agent. You could never have a course, "Introduction to Intelligent Agents", that purported to describe how intelligent agents function. No such description is possible, because no program describes an intelligent agent. And you can't characterize an Intelligent Agent on the basis of its output, because that is what a program is - a concise characterization of the output of a process. Note also that there is really only one type of randomness. (We're not modelling randomness mixed with determinism here, btw.) If process A is random and generates strings, and process B is random and generates strings, process A and process B are indistinguishable. You don't have varieties of pure randomness. It would seem that would have to be the case with Intelligent Agency as well. (For this post at least, I will refrain from putting quotes around "intelligent agency", but I don't think a human being, for example, is an intelligent agent. (OK Tim, KF, CJYMan, et al. time to go into paroxysms over that last remark.)) It would seem that there would be no way to distinguish any two entities said to be intelligent agents. With two purely random processes there is no pattern by which you can distinguish their output. Likewise with intelligent agency: if it were possible to look at some arbitrary string and say, "OK, this was definitely output by Intelligent Agent X" or "This was definitely not output by Intelligent Agent X", it would imply you had a program characterizing Agent X, which would mean it was not an intelligent agent. So there's no way to distinguish various intelligent agents from each other. And as I acknowledged, some people here want to say very complex programs (but not simple ones) can be intelligent agents. That appears to be a minority position in I.D. Most of you want to say Agency is a flavor of nondeterminism. (And Atom, I believe at one point you said a nondeterministic FSA modelled a human, but a nondeterministic FSA would be chance + necessity.) I'm not sure how I want to continue this discourse at the moment, but hopefully it starts to provide a framework for people in this forum to understand where I'm coming from. Actually, I do remember what more I need to say: I.D. will make comments to the effect, "We do know that intelligent agents routinely output [fill in the blank] FCSI, CSI, symbolic programs, etc. So if we see such things and don't know of a mechanism (i.e. program) that caused it, we are justified in assuming it was output by intelligent agency." But as I said, there isn't any string that can't be output by a mechanism.
I.D. has not provided a basis for establishing that it's more probable that some string was output by Intelligent Agency than by a mechanism, namely because Intelligent Agents don't have descriptions. However, if you have two programs, and one is a lot more complex than the other, you might have a basis for saying the more complex one is more likely to have generated some particular type of complex binary string. And once again, you can't characterize an Intelligent Agent on the basis of its output, because that is what a program is - a concise characterization of the output of a process. If a human being is characterizable as a binary string (which seems reasonable given DNA), then we know there exists some mechanism (i.e. program) that will output it. So it's not clear on what basis you could rule out some physical process in the universe that could have caused life. Now what is of note here, and in fact I have noted it many, many times without acknowledgement (it being, I might add, one of the very few things I mention repeatedly), is the following: If you have some program-input f(x) that outputs y, then f(x) equates to y. So any mechanism-input that is proposed as an explanation for y equates to y. So if you have some disparate, diffuse set of numerous factors existing out there in the universe that collectively resulted in life, that set of factors would still equate to life. As far as I know, the significance of this eludes everyone in this forum but me. But to continue: say that some binary string exists and that somehow it can be ruled out that it was the output of a mechanism. (Consider for example conditions at the beginning of the physical universe.) So all you have left for an "explanation" is either randomness, or, if we are to accept I.D., also possibly "Intelligent agency". What difference does it make which one of those two you pick? Actually there is a third alternative: the binary string in question could have always existed (and thus need not have been "caused" by randomness or "Intelligent Agency"). JT
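JT's counting claims above are standard algorithmic information theory, and a minimal sketch can make them concrete. The Python below is purely illustrative (the helper name program_for is hypothetical): any string has a trivial "print it" program, padding it with no-ops yields infinitely many more, and only a tiny fraction of coin-flip strings have programs much shorter than themselves.

    def program_for(s, pad=0):
        # One of infinitely many programs (as source text) whose output is s.
        return f"print({s!r})" + "\npass" * pad

    target = "10110100101010011011"
    exec(program_for(target))         # prints the string
    exec(program_for(target, pad=5))  # a longer program, same output

    # Counting check on compressibility: there are fewer than 2**(n-10) binary
    # programs more than 10 bits shorter than n, so at most 1 in 1024 strings
    # of n coin flips can be compressed by even 10 bits.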
"The Halting Problem" Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. We say that the halting problem is undecidable over Turing machines. (from Wikipedia) ROb, if I read you right, you said that my argument was fallacious, but I gave an example where humans showed foresight in agreeing to call a draw in a king on king chess match. Somehow the humans were able to do something no computer could do, ever. They were able to innovate, or create. I may not have been clear when I wrote that the two computers playing each other had ALSO traded down to king v. king. In that case, though, they have no foresight whatsoever in terms of the outcome of the game; they chase and chase and chase. They never checkmate because, well for one thing, they are precluded from occupying adjacent squares (not that that would do any good). Now, I don't know as much about computers as I do about chess, but I think my little scenario holds up just fine. Two absolute beginners in chess will, without any coaching, come to this conclusion. Two computers NEVER will. Yes, never is a strong word, but I'll rely on an application of Turing's proof over Turing machines. Hmm. This suggests that those two humans are somehow qualitatively different than the computers; perhaps they are not merely physical embodiments of Turing machines. Wait a minute! That almost speaks of some type of agency, something final and outside of strict materialism -- a divine toe in the door? Tim
kairosfocus:
A computer has nothing resembling foresight, as Tim has just exemplified.
Tim's example was fallacious, as I showed, unless you think that intelligent agents can solve the halting problem. Is that your position? And since ID theorists disagree on whether computers can have foresight (see CJYMan for a position contrary to yours and Tim's), I'll assume that ID theory hasn't resolved the question.
First, do you understand computer architecture, classically “the machine language’s view of the system”?
Not that it matters, but I have an MS in electrical engineering, specifically computer architecture. You can use language as technical as you like on the subject.
It is simply a machine that mechanically processes programmed instructions at bit level based on arrangements of logic gates and registers and clock and control signals, to give controlled predictable outputs.
Which is exactly what I mean when I say that computers operate according to physical laws, or chance and necessity. Disagreement resolved. Thank goodness.
If you do not understand something that basic, sorry, but you are in no position to seriously discuss the issues you have raised. Instead you need to do some 101 level reading.
Given that I agree with you on the physical operation of computers, and with CJYMan on whether computers can have foresight, your condescension seems ill-advised. And annoying, as is your verbose ad nauseam repetition of points I've already addressed. Please excuse me if I take a break from you for awhile. R0b
ROb: "A common point of criticism by the scientific community is ID’s vagueness (cf “written in jello”). IMO, the definitions offered in the glossary lend weight to that criticism." 1. http://cjyman.blogspot.com/2008/02/is-dr-dembskis-work-written-in-jell-o.html 2. Which definitions do you have a problem with and why? You do realize that the same can be said of any scientific theory -- words can only be defined so much until you are left with a circularity of definitions and/or some concepts which are not necessarily clear and may seem to border on the metaphysical. ie: define "force" and define "matter" or looking in the direction of evolution, define "random" in "random mutation and natural selection." Oh, and please define these terms without relying on Webster. ROb: "They’re great as a basis for endless philosophical debates, but they don’t work as a basis for a scientific theory. The scientific community is waiting for a technical treatment of ID theory, not hand-holding explanations by way of examples and Webster definitions." 1. I've already dealt with why examples -- observations -- are so important above. You have not countered with anything but assertion. 2. CSI and active info provide excellent technical treatment, as does any research into intelligence -- the modeling of the future and generation of targets -- whether artificial or conscious. Fundamentally, though, all you need to do is realize that you do indeed possess foresight and that you use your foresight to generate certain patterns, such as these comments, which wouldn't exist if your foresight did not exist. Do you agree or disagree? Will chance and law, *absent foresight* produce these comments? That is what the basic hypothesis of ID relies upon. Then, the math as laid out by Dembski provides a rigorous method of detecting previous intelligence (foresight) and shows why law and chance will not purchase CSI. All the examples -- observations -- back up the ID hypothesis and there are no counter examples. ROb: "By focusing on this blunder, we miss the meat of the issue, which is that ID seems to portray intelligence as random." I'm sorry but I missed your argument. PLease show me how the modeling of future possibilities, the generation of targets, and the harnessing of law and chance to generate those targets can be portrayed as "random." YOu may have meant that the outcome of intelligence can appear random upon first inspection, but that has already been dealt with earlier as both randomness and intelligence produce highly contingent patterns. Its just that intelligence produces highly contingent patterns which are functionally/meaningfully specified and use up all probabilistic resources (highly improbable), whereas chance/randomness does not. ROb: "Computers operate according to physical laws." Yes, once they are put together and programmed, they will follow physical laws. Everything within nature must follow physical law. Still, they are not "only" chance and law. They also contain that non-lawful and non-random, programming of states. Hmmmmm ... then that means that there exists something within nature which is both non-lawful and non-random. In this case it's called instructional information and it is derived from foresighted systems. There is a difference between "following" law and "incorporating chance" versus "reducible" to *only* law and chance as I've just shown above. ROb: "ID needs to spell out the distinction between “directed” and “undirected” in a scientific fashion, rather than a Webster definition." 
You mean something like "directed" = "modeling future possibilities, generating targets, and harnessing law and chance to generate those targets", and "undirected" = "the lack of such modeling and targeting". Again, refer to top of this comment re: definitions. ROb: "If a computer program uses data to predict the consequences of various courses of action, and then takes the course of action with the most favorable predicted consequence, does that count as “directed”? How about “intelligent”?" That counts as artificial intelligence, as I have explained above. Conscious foresight is actually able to envision future states, though, and that is what ID refers to as intelligence -- I would personally qualify that with conscious or "true" intelligence as opposed to artificial or non-conscious intelligence. We observe that all AI requires true intelligence in a complete causal chain. I'm honestly not seeing the point that you are trying to make re: "intelligence" "randomness" and "law." CJYman
Rob: First, do you understand computer architecture, classically "the machine language's view of the system"? A computer has nothing resembling foresight, as Tim has just exemplified. It is simply a machine that mechanically processes programmed instructions at bit level, based on arrangements of logic gates and registers and clock and control signals, to give controlled, predictable outputs. (Do you understand finite state machine algebra or register transfer algebra, or just plain old Boolean algebra, gates and RS flip-flops and their extensions as D f/fs, JK f/fs, registers, counters, clocks, etc.? These are what drive understanding of what a PC does, and how.) ALL the smarts in a computer are in the design put into its hardware and its programs and data structures. If you do not understand something that basic, sorry, but you are in no position to seriously discuss the issues you have raised. Instead you need to do some 101-level reading. As to the opposition between necessity and contingency, you continue to fail to understand that there are two very sharply distinct ways to be contingent, undirected and directed; the latter of which is directly familiar to every reasonably intelligent human being from how s/he interacts with the world. So, you are falling into self-referential inconsistencies and selective hyperskepticism, even as you set out to create a contextually responsive digital text string in English. Chance and design are quite distinct in ID inference contexts [and in engineering and in programming and in statistics and in management and in a lot of the rest of what people do in serious contexts in the world], and if you would take time to simply examine the repeatedly given die example you would see so through a concrete example. The explanatory filter (as adjusted to explicitly address aspects) is another way to look at it, from the view of the analysis of phenomena or objects by aspects. For this blog, we have in the glossary provided a description as follows:
Chance – undirected contingency. That is, events that come from a cluster of possible outcomes, but for which there is no decisive evidence that they are directed; especially where sampled or observed outcomes follow mathematical distributions tied to statistical models of randomness. (E.g. which side of a fair die is uppermost on tossing and tumbling then settling.) Contingency – here, possible outcomes that (by contrast with those of necessity) may vary significantly from case to case under reasonably similar initial conditions. (E.g. which side of a die is uppermost, whether it has been loaded or not, upon tossing, tumbling and settling.) Contingent [as opposed to necessary] beings begin to exist (and so are caused), need not exist in all possible worlds, and may/do go out of existence. Necessity — here, events that are triggered and controlled by mechanical forces that (together with initial conditions) reliably lead to given – sometimes simple (an unsupported heavy object falls) but also perhaps complicated — outcomes. (Newtonian dynamics is the classical model of such necessity.) In some cases, sensitive dependence on [or, "to"] initial conditions may lead to unpredictability of outcomes, due to cumulative amplification of the effects of noise or small, random/accidental differences between initial and intervening conditions, or simply inevitable rounding errors in calculation. This is called "chaos." Design — purposefully directed contingency. That is, the intelligent, creative manipulation of possible outcomes (and usually of objects, forces, materials, processes and trends) towards goals. (E.g. 1: writing a meaningful sentence or a functional computer program. E.g. 2: loading of a die to produce biased, often advantageous, outcomes. E.g. 3: the creation of a complex object such as a statue, or a stone arrow-head, or a computer, or a pocket knife.)
You may say what you want by way of selectively hyperskeptical and self-referentially inconsistent objections about "fuzzy concepts"; but here you have a specific cluster of definitions with examples. Kindly tell us in what way these are so fuzzy -- in the context of ourselves as rational, learning, purposeful, designing animals living in communities and civilisations that depend on design for technology -- that they cannot be empirically recognised and differentiated as distinct. (If you object, your objections must not fall into selective hyperskepticism [which is inherently self-referentially inconsistent]; e.g. a good part of the definition of design above is actually based on the classical definition of what the profession of engineering is about. Similarly, that of necessity is very close to a description of what dynamics and differential or difference equation models are about -- the framework that Laplace had in mind when he talked about his demon; the classical modern model of determinism.) GEM of TKI kairosfocus
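For onlookers who want the gate-level picture above made concrete, here is a minimal sketch (Python, purely illustrative) of the cross-coupled NOR latch kairosfocus mentions: set, hold, and reset all follow mechanically from the gate equations, which is the "controlled, predictable outputs" point.

    def nor(a, b):
        return int(not (a or b))

    def rs_latch(q, set_, reset):
        # One settling step of a cross-coupled NOR (RS) latch.
        q_bar = nor(set_, q)
        return nor(reset, q_bar)

    q = 0
    q = rs_latch(q, set_=1, reset=0)  # set: q -> 1
    q = rs_latch(q, set_=0, reset=0)  # hold: q stays 1
    q = rs_latch(q, set_=0, reset=1)  # reset: q -> 0
    print(q)  # 0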
Tim:
Here’s foresight. On board one, two humans play each other and trade down to king chasing king. Ok, these guys aren’t the greatest chess players, but they eventually get some foresight “Uh, this is never going to end,” and agree to a draw. On board two, two computer programs trade down and chase until their batteries run out. No, they haven’t the foresight to offer a draw unless it’s been programmed in, and then, well then it is experience, not foresight. This could be made more rigorous,
Tim, that's a great illustration, and it actually has been made more rigorous in computing theory -- it's the halting problem. It's not hard to prove that no computer can look at any board of any game (not just standard chess) and determine whether rational play will result in a never-ending game. But it's also pretty clear that humans don't have that ability either. And given that an appropriately programmed computer can predict the consequences of chess moves better than any human can, it's not unreasonable to think that an appropriately programmed computer can predict, better than humans, whether games will go on forever. R0b
"Yes, I know that computers’ capabilities are bestowed by programmers. The relevant fact is that an appropriately programmed computer has the capability of foresight." --ROb I disagree. Computers, even appropriately programmed computers, are NOT capable of foresight. Although computers may demonstrate outputs commensurate with what would be called foresight in their human programmers, that does not mean that the computer is capable of foresight. A simple example: in a game of chess a very modest computer program may be able to weigh several future "states" of the board and after comparing the "value" of each, choose the best and push a pawn as opposed to knight. The player opposite may comment, "This program was able to see ahead and determine that moving the knight was a poor choice because I would have . . . " Thus, it is very tempting to say that the computer had foresight. The fact of the matter, though, is that the computer did nothing more than manipulate a ton of this: 101101001010100110111110001010101010110101011010101011010101000001 into some of this 10101101000101010100000111111111110 based on our experience of the game of chess. That's not computer foresight; it's number crunching. Here's foresight. On board one, two humans play each other and trade down to king chasing king. Ok, these guys aren't the greatest chess players, but they eventually get some foresight "Uh, this is never going to end," and agree to a draw. On board two, two computer programs trade down and chase until their batteries run out. No, they haven't the foresight to offer a draw unless it's been programmed in, and then, well then it is experience, not foresight. This could be made more rigorous, but I think it hints at a reason you are having such difficulty with the directed/random/law definitions. This is why I would also disagree with the idea "that computers are capable of directed contingency". More on that later. Jerry at 108, nicely put. The introduction of agency . . . ROb, you seem to want agency to be reducible to law or chance or to some category that is tied to them, but what if intelligence is not reducible in mechanistic way? I may have misstated your case, so nevermind that, but I do find that looking at agency in this way is actually a lot more helpful and for me, a lot more comprehensible. Tim
Okay, I don't have time to carefully read, much less respond to everyone's comments, so I'll try to summarize a few points: - A common point of criticism by the scientific community is ID's vagueness (cf "written in jello"). IMO, the definitions offered in the glossary lend weight to that criticism. They're great as a basis for endless philosophical debates, but they don't work as a basis for a scientific theory. The scientific community is waiting for a technical treatment of ID theory, not hand-holding explanations by way of examples and Webster definitions. - If JT recognizes that not all contingency is intelligence, then he should not have used the word "equates". The UD denizens are certainly correct on that. It is also a fact that "A is not equal to B" is not the same as "A is the complement of B". Characterizing JT's position on randomness and law as the former rather than the latter is unfair. If we recognize his position as the latter, then his blunder is that he should have said "entails" rather than "equates". By focusing on this blunder, we miss the meat of the issue, which is that ID seems to portray intelligence as random. - Computers operate according to physical laws. Yes, the state transitions depend on the current state, which includes the software stored on the system and the configuration of the hardware. And yes, software and hardware are typically designed by intelligent humans. But humans are not part of the computer system, so the mental process of designing hardware and software is separate from the process of program execution. The latter uncontroversially operates according to the laws of physics. I think that any disagreement on this issue is purely semantic. - If "contingency", "randomness", "non-determinism", and "chance" are distinct in ID theory, then ID should spell out that distinction. Atom seems to equate non-determinism with contingency, and randomness with undirected contingency. Is that how the ID camp in general uses those terms? Is there anywhere in the voluminous probability literature where I can read about this thing called "directed contingency" that is neither deterministic nor random? - ID needs to spell out the distinction between "directed" and "undirected" in a scientific fashion, rather than a Webster definition. If a computer program uses data to predict the consequences of various courses of action, and then takes the course of action with the most favorable predicted consequence, does that count as "directed"? How about "intelligent"? Hopefully I'll have more time later. R0b
ROb, "intelligence = randomness". What is clear is that you are not random er ... bFast
kairosfocus, I can only scratch the surface of your lengthy posts. I wish I had time to address them exhaustively, but I don't.
But, Rob and JT, randomness is NOT to be defined as or seen as non-determinism.
Says who? I'm happy to use ID's definitions of randomness and non-determinism if you'll tell me what they are.
As has been repeatedly stated — but just as repeatedly ignored: mechanical forces will give rise to natural regularities, but there are CONTINGENT situations where under remarkably similar initial conditions, quite diverse outcomes are possible.
Who ignored that fact? Your description of "CONTINGENT situations" is exactly that of non-deterministic processes (although it might also describe deterministic processes that are chaotic). Is contingency synonymous with non-determinism?
Nor is this a strange or unexpected definition [of intelligence]: it is immediately recognisable from our experience and observation of our fellow, rational and moral animals. Why, then is there now a pursuit of infinite regress by demanding “definition” of “directedness” vs “undirectedness”?
Because I want to know what ID means by "directed". If "directed contingency" is a key term in ID theory, then ID needs to define it. And pointing to a Webster or Wikipedia definition only reinforces the perception that ID is not a scientific theory. Also, the above definition doesn't tell us whether "ID says that “intelligence” is not reducible to law, matter and energy." So let's resolve that right now. Does ID say that or not?
Directedness is a subset of contingency, as has both been stated and exemplified.
Yes, I know, and I stated as much when I said that intelligence is also characterized by directedness. The question is whether directedness is related to or independent of determinacy.
Computer programs are capable of no more foresight than was written into them by their programmers. Stochastic inputs do not change that; they simply give rise to patterns based on the stochastic inputs, e.g. Monte Carlo simulations.
Yes, I know that computers' capabilities are bestowed by programmers. The relevant fact is that an appropriately programmed computer has the capability of foresight. The output of a program can also be contingent. So it seems that computers are capable of directed contingency. Is that correct or not?
Your favourite rhetorical assertion that ID is assuming what it should not is exposed by the simple exercise of tossing dice, one loaded, one unloaded.
If you're talking about my point that ID assumes the irreducibility of intelligence/design, I get that straight from ID proponents. Would you like me to provide some quotes?
Can you tell the difference?
Of course. As I said before, the concept of determinacy is pretty well defined. If a die is loaded so it always lands on the same side, then throwing that die is a deterministic process. Throwing an unloaded die, on the other hand, is a classic illustration of non-determinism. Is non-determinism the same as contingency? If so, then your claim (is this ID's claim?) is that intelligence is non-deterministic. Is ID anti-compatibilist?
repeatedly been pointed out. e.g. at 96 by bFast: A is not equal to B,
As I've already pointed out, this doesn't fully state JT's position. Not only is randomness not equal to law/determinism, but the two are also complements. If intelligence is also the complement of law, then it follows that intelligence = randomness. If intelligence is only a subset of the complement of law, then it follows that intelligence entails randomness, but not necessarily vice-versa. If JT acknowledges the directed/undirected distinction as meaningful, then "equates" was not the right word. But do you agree that intelligence entails randomness? R0b
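On the Monte Carlo point quoted in this exchange, a minimal sketch (Python, purely illustrative) shows what both sides appear to accept: a program can harness stochastic inputs toward a fixed, programmed-in target, here an estimate of pi.

    import random

    def estimate_pi(samples=1_000_000):
        inside = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
                     for _ in range(samples))
        return 4.0 * inside / samples

    print(estimate_pi())  # ~3.1416: random inputs in, a lawlike pattern out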
PS: Jerry, it is usually more useful to build up to metaphysics in light of empirical observations, per the comparative difficulties criterion of factual adequacy. In this case [cf the just above], we observe chance and design in action. Let the metaphysical chips lie where they fly, having made that observation. kairosfocus
CJY, 106:
Why would Kairosfocus disagree when he helped collect the definitions within the glossary of terms? Which definitions do you have a problem with and why? However, in conclusion, it seems that you don’t want examples because all the examples help bolster the case for ID and you have no counter examples. It seems that you wish to nitpick the definitions, which of course is not a problem if you have genuine concerns. Examples are extremely important in any science as it is the examples — observations — which either back up the science in question or falsify it.
Pree-zactly! Excellent. And the examples start with two repeatedly dropped, tumbling and settling dice, one fair, the other loaded: [1] Dropping on being let go -- natural regularity tracing to mechanical force. [2] Tumbling to one of several possible outcomes under essentially similar initial conditions: contingency. [3] Die A settling to one of 6 outcomes with odds of about 1 in 6: chance. [4] Die B NOT settling to that pattern, but, say, having 6 uppermost 1/2 the time: design, i.e. directed contingency. Rob and JT, can you see the differences? Why or why not? GEM of TKI kairosfocus
In the process of trying to find what I have said in the past, I came across this comment I made about 3 years ago. Which just goes to show us that there are few new discussions here. This is what I said about chance, law and intelligence three years ago: "Anytime the discussion gets philosophical, I get uneasy because the terms used are very general and that to me means vague. I am sure they have precision but the precision is not in the lexicon we commonly use. Given that, I have a few comments. Is there really such a thing as chance? Or when we say chance do we mean that we do not understand all the forces or complexities that underlie a situation, so that what appears or happens because of our insufficient knowledge is then described as random or by chance? If all the laws are working and there is nothing but atoms, quarks, mesons, etc., are not the existence, characteristics and motion of each really determined by some basic physical laws? Now I understand that there is something called the uncertainty principle and that Quantum Mechanics causes some unusual results, but does this mean that some of these particles are not following some basic set of laws? I understand that we may not know just what some of these laws and forces might be, but that does not mean these basic laws are not operating and causing these particles to behave in a specific manner. I also understand that the complexity of the particles and forces involved may be beyond any calculations we are capable of, but that also does not mean that basic laws are not behind everything. So is what we mean by chance really just a subset of law? Isn't the term "agency" just the introduction of the free will of some intelligence into the equation? This intelligence exerts some new force into the physical world, thus moving the basic particles either here or there depending upon the nature of the force created by the intelligence. In other words, the forces that would ordinarily be operating based on laws are modified somewhat by a new force caused by a freely thought out action of an intelligence. If it is not freely thought out, then we have to assume the "so-called" intelligence is determined by some other laws we may not be aware of. I know that chance has been discussed in detail elsewhere on this blog, but I am just trying to put the framework offered here in this post into some other framework that I can understand and possibly discuss with a typical person who doesn't have a background in philosophy. And to also understand it better myself. In other words, we have laws and then we have free will. And I understand that philosophers have been discussing this topic for a few thousand years with no consensus." jerry
ROb: "It should be clear by now that JT sees law and randomness as complements, which is logically the case if law=determinism and randomness=non-determinism. So if intelligence doesn’t fall in the category of law, then it must fall in the category of randomness." Didn't I already deal with this logical fallacy when I pointed out, as everyone already knows ... "A is not equal to B C is not equal to B Therefore, A = C" ... is completely illogical. Figure it out already!!!! As Atom said above, move along now, nothing more to see here ... CJYman
Pattern described by low contingency = pattern defined by a mathematical description of regularity (law). Pattern described by high contingency = pattern not defined by a mathematical description of regularity. Pseudo-random generators do not generate true contingency, since there are regularities inherent in the outputs. They are described by chance + law. There is a consistent regularity in the output, although it appears random in the short term. True randomness is an example of high contingency, since consistent regularities are not found within background noise. True randomness is described by chance even though it is the output of a conglomerate of a chaotic assembly of laws. There is no consistent regularity in the output. Check out random.org -- especially the write-up re: bitmap images and the difference between true randomness and pseudo-randomness. It is a wealth of information in relation to this topic. Either way, though, neither pseudo-randomness nor true randomness produces the same type of patterns that are the result of the modeling of future possibilities, target generation, and harnessing law and chance to produce those targets. This foresight produces functionally specified patterns in which the info content uses up all probabilistic resources, whether this foresight is artificial or conscious. Tell me, do you possess foresight, and do you use your foresight to produce these comments of yours? Would chance and law *absent your foresight* be a good explanation for these comments that appear to come from a foresighted agent with the handle "ROb"? ROb: "I think that solid definitions would be more helpful to ID's case than examples. (Kairosfocus might disagree.)" Why would Kairosfocus disagree when he helped collect the definitions within the glossary of terms? Which definitions do you have a problem with, and why? However, in conclusion, it seems that you don't want examples because all the examples help bolster the case for ID and you have no counterexamples. It seems that you wish to nitpick the definitions, which of course is not a problem if you have genuine concerns. Examples are extremely important in any science, as it is the examples -- observations -- which either back up the science in question or falsify it. CJYman
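CJYman's pseudo-randomness point is easy to demonstrate: a textbook linear congruential generator (constants from Numerical Recipes) produces output that looks random but is pure law, since the same seed always yields the same sequence. A minimal Python sketch:

    from itertools import islice

    def lcg(seed, a=1664525, c=1013904223, m=2 ** 32):
        # Textbook linear congruential generator: x -> (a*x + c) mod m.
        x = seed
        while True:
            x = (a * x + c) % m
            yield x

    run1 = list(islice(lcg(42), 5))
    run2 = list(islice(lcg(42), 5))
    print(run1 == run2)  # True: fully deterministic -- "chance + law", not true randomness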
R0b wrote:
It should be clear by now that JT sees law and randomness as complements, which is logically the case if law=determinism and randomness=non-determinism. So if intelligence doesn’t fall in the category of law, then it must fall in the category of randomness.
Sorry if JT has problems with classification and therefore is making false dichotomies, but don't blame us if we don't follow his example. If Law is Determinism, then the complement is Non-Determinism, not "randomness". This is simple. Now JT wants to take the extra step and claim (implicitly): "All non-determinism is randomness". Excuse me!? Sorry if we don't hold the same metaphysical belief he does. Randomness is one type of non-determinism; Agency is another. One is goal-directed, the other is not. Very simple. Just admit it: JT made a blunder, and no amount of smoke at this point is going to cover that up. Just man up and move on. Or you can keep trying to convince us that all non-determinism is equal to randomness. (I hope you have some way of demonstrating this claim...) Atom
Let's consider this by example.
I appreciate that, but I think that solid definitions would be more helpful to ID's case than examples. (Kairosfocus might disagree.) R0b
Tim:
Both the novel and the physical state of the computer changed, but they did not evolve except according to the novelist’s will and the initial information that was encoded, and I think we know how that type of evolution matches with Neo-Darwinian evolution. . . er, not very well.
When I talked about the system state evolving, I wasn't making any allusion to biological evolution. Sorry if that was confusing. R0b
Tim:
"If "foresight" makes a process "directed", then computers are apparently capable of directed contingency, as long as they have a random number generator that meets ID's definition of contingency." And I wrote this: "If bald foreheads make a man sexy, then computers are apparently sexy as long as they have a randomly chosen surface that is shiny and hairless, thus somehow matching someone's definition of bald."
If your statement is analogous to mine, then apparently a random number generator (even one that is QM-based) does not meet ID's definition of contingency. I'll take that as a data point in my never-ending search for ID's definition of contingency. R0b
JT and ROb, What about those laws? I ask because they too are evidence for ID and a designer. Joseph
PS: In my always linked, I discuss the issue of the origin of functionally specific complex information, in the context of lucky noise vs mind. In so doing, I already pointed out that there is a significant threshold of complexity [i.e. no. of bits, so config space scales as 2^n] that has to be crossed -- per chance + necessity only -- by random generation of patterns before selection processes can act on differential functionality to hill-climb to optimality. In short, you have to get to the beach of an island in the ocean of possibilities before you can think about climbing to its mountain tops of peak performance. In discussing this, I have found it important to raise an issue on the link between views on the origin of mind and the implications for reliability; one that evo mat advocates will doubtless find challenging or even painful, but I think we need to soberly think it through, especially given what has run on above: _____________ . . . [evolutionary] materialism [a worldview that often likes to wear the mantle of "science"] . . . argues that the cosmos is the product of chance interactions of matter and energy, within the constraint of the laws of nature. Therefore, all phenomena in the universe, without residue, are determined by the working of purposeless laws acting on material objects, under the direct or indirect control of chance. But human thought, clearly a phenomenon in the universe, must now fit into this picture. Thus, what we subjectively experience as "thoughts" and "conclusions" [as well as "purposes," "goals," "plans" and "designs"] can only be understood materialistically as unintended by-products of the natural forces which cause and control the electro-chemical events going on in neural networks in our brains. (These forces are viewed as ultimately physical, but are taken to be partly mediated through a complex pattern of genetic inheritance ["nature"] and psycho-social conditioning ["nurture"], within the framework of human culture [i.e. socio-cultural conditioning and resulting/associated relativism].) Therefore, if materialism is true, the "thoughts" we have and the "conclusions" we reach, without residue, are produced and controlled by forces that are irrelevant to purpose, truth, or validity. Of course, the conclusions of such arguments may still happen to be true, by lucky coincidence — but we have no rational grounds for relying on the "reasoning" that has led us to feel that we have "proved" them. And, if our materialist friends then say: "But, we can always apply scientific tests, through observation, experiment and measurement," then we must note that to demonstrate that such tests provide empirical support to their theories requires the use of the very process of reasoning which they have discredited! Thus, evolutionary materialism reduces reason itself to the status of illusion. But, immediately, that includes "Materialism." For instance, Marxists commonly deride opponents for their "bourgeois class conditioning" — but what of the effect of their own class origins? Freudians frequently dismiss qualms about their loosening of moral restraints by alluding to the impact of strict potty training on their "up-tight" critics — but doesn't this cut both ways? And, should we not simply ask a Behaviourist whether s/he is simply another operantly conditioned rat trapped in the cosmic maze? In the end, materialism is based on self-defeating logic . . . .
In Law, Government, and Public Policy, the same bitter seed has shot up the idea that "Right" and "Wrong" are simply arbitrary social conventions. This has often led to the adoption of hypocritical, inconsistent, futile and self-destructive public policies . . . . In short, ideas sprout roots, shoot up into all aspects of life, and have consequences in the real world . . . __________________ Okay, onlookers, is this what is really going on under the surface of the above? Why or why not? kairosfocus
6] Rob, 97: If intelligence, according to ID, isn't reducible to law, and if the term "law" indicates that a process is deterministic, which seems a reasonable interpretation, then intelligence must be non-deterministic. Other terms that are commonly used interchangeably with "non-deterministic" are "stochastic" and "random". Onlookers, we experience and observe every day, routinely, that there is another form of highly contingent process: design, premised on intelligence. And, we have repeatedly pointed out that the proper opposition is: low/no contingency vs high contingency, the latter being in some cases undirected, in others directed. So, to insist as Rob has done in the above clip -- sad to say -- is to willfully set up and knock over a strawman. Let us again get a testimony against interest from Wikipedia:
Design is used both as a noun and a verb. The term is often tied to the various applied arts and engineering (See design disciplines below). As a verb, "to design" refers to the process of originating and developing a plan for a product, structure, system, or component with intention[1]. As a noun, "a design" is used for either the final (solution) plan (e.g. proposal, drawing, model, description) or the result of implementing that plan in the form of the final product of a design process[2].
Plainly, we are not being idiosyncratic; as even notoriously anti-ID Wikipedia has had to acknowledge here. The saddest thing about this is that in other contexts where we will not be heard in our own voice, such strawmen will be presented as what we think, and will be taken as gospel truth. 7] If "foresight" makes a process "directed", then computers are apparently capable of directed contingency, as long as they have a random number generator that meets ID's definition of contingency. Again, Rob, there are two distinct varieties of the contingent: directed and undirected. You have -- predictably -- substituted undirected for directed. Also, programs have no more insight or foresight than was written into them by their programmers, in the algorithms and in the data input from the situation they are applied to. GIGO -- garbage in, garbage out. 8] When a non-interactive program is executing, the physical state of the system evolves according to the laws of physics. This is true regardless of how the system got into the state that included a loaded program that was starting to run, which means that it's independent of the question of who or what designed and programmed it. This is what we mean when we say that computers operate according to law, or law and chance. The fundamental cycle of programming is: input, process, output. Whether the inputs were stored in input data structures in memory or not, or even written into the code being executed, inputs there are. Similarly, processing is based on the inputted design of the algorithm, and reflects its assumptions about the world. Thirdly, the laws of nature and the laws of mathematics etc. constrain the design of a processor and its functioning; they do not determine it; cf. the architectures of a 4004 of 1971 with a 68020 of 1984, or a Pentium of whatever level, etc. So, processor behaviour, as that of a physical hardware object carrying out a software program based on microcode [or even hard-wired instruction execution], is NOT independent of the intelligent inputs of the designers involved. Engineering uses the forces and materials of nature to intelligently and economically achieve goals by creating designed structures and processes, hopefully for the benefit of humanity. And, as the growing body of patents discloses, this is highly contingent, creative to the point of allowing intellectual property rights, and non-random. In short, pardon, your selective, self-referentially incoherent hyperskepticism is showing. 9] if anyone thinks that JT's or my usage of ID terms is unreasonable, then they should work on coming up with definitions that don't raise more questions than they answer, and then using the terms consistently. Onlookers, all of this is in a context where there is a whole vocabulary discussed above in a glossary. As for one definition leading to further and further questions, the first underlying rhetorical issue is that we are looking here at a refusal to seriously interact with how we have identified concepts by reference to key case studies, which allows us to escape the infinite regress of inferences without resort to circularity. Rob, go get those two dice and play with them for a while, then come back and tell us what you learned. (E.g. Why not go to Las Vegas and try to play a dice game with loaded dice and see what happens? Tell us why, in light of, say, JT's equating of undirected and directed contingency. [Of course, this last one is strictly a thought exercise!
We don't want Rob to go to gaol as a cheat.]) The second rhetorical tactic is to pretend that we are using words in idiosyncratic and confusing ways. In fact, as the very glossary testifies, we are using terms in quite common and standard ways. Ways that even Wikipedia, with an interest against ID, is forced to acknowledge as legitimate. So, we have a reasonable expectation that informed readers such as Rob and JT will recognise those ways, especially since we have given concrete examples that can be carried out as experiments -- e.g. fair vs loaded dice. Thirdly, there is a turnabout false accusation involved. For it is JT (backed up by Rob) who is using plainly idiosyncratic "definitions," when he attempts to equate chance and design. By sharpest contrast, ever since the days of Plato, we have recognised the distinction, and it continues to be useful today, as even Wikipedia has to acknowledge. Indeed, in law, we recognise that sufficiently innovative designs are intellectual property. Also, we know that where such designs involve irreducibly complex systems and/or functionally specific complex information in excess of, say, 1,000 bits of capacity, no reasonable random search will get to the islands of function in the config space, on basic probability calculations. Nor is there good reason to believe that the substance of Microsoft Office 2007 is written into the laws of the universe, even if there is a blend of chance and necessity at work. Nor did Mr Gates hire a zoo full of monkeys to create it by pounding keyboards at random. Nor did he set up a random search-and-select-for-function process as his primary design tool. All of this is perfectly, patently obvious. _________________ Onlookers, the reduction to absurdity implicit in evolutionary materialism is becoming ever more painfully plain in this thread. Sadly so. Rob and JT, surely, you can do better than this! GEM of TKI kairosfocus
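The config-space arithmetic invoked above is simple to reproduce. A minimal sketch (Python; the 10^150 trial budget echoes the "probabilistic resources" bound used in this thread) of the 1,000-bit threshold:

    config_space = 2 ** 1000     # distinct 1,000-bit strings, ~1.07e301
    trial_budget = 10 ** 150     # generous bound on total search trials
    fraction = trial_budget / config_space
    print(f"fraction of space searchable: {fraction:.3e}")  # ~9.33e-152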
Ah, boy . . . One could not make up the last few dozen exchanges in this thread! Priceless, but in another sense, ever so sadly revealing of what is going on with evolutionary materialist thought, which is plainly now at the point where the bankruptcy is obvious to all who will but look. (However, proverbially, there are those who claim to be sighted but . . . ) On a few points: 1] Rob, 97: JT sees law and randomness as complements, which is logically the case if law=determinism and randomness=non-determinism. So if intelligence doesn't fall in the category of law, then it must fall in the category of randomness. But, Rob and JT, randomness is NOT to be defined as or seen as non-determinism. That is an artifact of the evolutionary materialist view imposed a priori, a la Lewontin. And, it is a point where the factual adequacy gap between what evo mat permits and what we observe and experience is obvious. As has been repeatedly stated -- but just as repeatedly ignored: mechanical forces will give rise to natural regularities, but there are CONTINGENT situations where, under remarkably similar initial conditions, quite diverse outcomes are possible. That this last is as close as the tossing of a fair vs a loaded die, and as close as the falling that reliably happens when such a die is no longer supported, should tell us just how pervasive and accessible the relevant facts are. So, one is reminded of the parable of Plato's cave: someone has got out, and has returned, inviting his fellows to join him in getting up, looking around and seeing the apparatus of projection of the shadow show, then to climb out of the cave. But, for the "true believer" in evo mat: reality "cannot" be different from the shadow-show . . . 2] Presumably, the ID position is that intelligence is characterized not only by contingency (can we agree that this means randomness?), but also by directedness. So now ID's task is to come up with a reasonably unambiguous definition of the distinction between "directed" and "undirected". Onlookers, observe: at the top-right of this page, there is a glossary. In that glossary is an item on intelligence, and attention was drawn to that item specifically in post no. 89. The definition constitutes a citation from a known anti-ID source (a la admission against interest), Wikipedia, namely:
Intelligence – Wikipedia aptly and succinctly defines: “capacities to reason, to plan [which plainly implies foresight and is directly connected to the task of designing], to solve problems [again foresighted and goal directed], to think abstractly, to comprehend ideas, to use language, and to learn.”
Nor is this a strange or unexpected definition: it is immediately recognisable from our experience and observation of our fellow rational and moral animals. Why, then, is there now a pursuit of infinite regress by demanding "definition" of "directedness" vs "undirectedness"? Because of confusion between concept formation -- through experience of sufficient examples to infer and label a pattern -- and the role of definition as identifying the borders of concepts. We point to experience and observation of concrete cases, appealing to family resemblance and the pattern-recognising capacity of the mind. We point out that such is logically prior to precising statements or genus-difference taxonomies, etc. Indeed, we check statements for adequacy against known cases and counter-cases. (And lurking in the background is the point that there are some truths that, once we as rational-moral animals experience enough and come to understand, we see must be so; i.e. these are self-evident.) So, we point to the cases already given, and ask for interaction with them: a dropped fair vs a dropped loaded die; a fork in the road taken by choice vs at random; the design of aircraft; etc. Too often, only to be ignored or dismissed as the other parties rush on to reductio ad absurdum. 3] Is directedness a sub-spectrum of the determinacy spectrum, or is it orthogonal to determinacy? If computer programs are capable of foresight, does the execution of such a program, with a stochastic input, constitute directed contingency? Is directed contingency the same as libertarian free will? Where can I find a usable definition of directed/undirected in the ID literature? See what we mean? Directedness is a subset of contingency, as has both been stated and exemplified. Rob, go get yourself two dice, one loaded, one fair. Toss them a few dozen times. What is regular, what is diverse? What is stochastic and what is purposeful and goal-directed? Computer programs are capable of no more foresight than was written into them by their programmers. Stochastic inputs do not change that; they simply give rise to patterns based on the stochastic inputs, e.g. Monte Carlo simulations. Whether or not there is libertarian free will as an ontological matter, we observe and experience directed contingency. ID starts with that fact of experience, and anchors itself to that realm. Your favourite rhetorical assertion that ID is assuming what it should not is exposed by the simple exercise of tossing dice, one loaded, one unloaded. Can you tell the difference? Why or why not? (A small simulation sketch of this exercise appears just after this comment.) [And if we onlookers can see you thus ignoring or rejecting obvious facts, do you not see that you are reducing yourself to absurdities before our eyes?] 4] Rob, 90: JT. If you want to have the same kind of success that the ID movement enjoys, then you need to learn some things from UD denizens. Yes, Galileo: if you want to experience the same success as the Simplicios of this world, you really need to stop listening to those silly Copernicans. It will only get you into trouble with the Magisterium to keep on raising silly questions about gaps in the well-proven Ptolemaic theory! It has no weaknesses! None! 5] Do you really not understand JT's points, or are you only pretending to misunderstand them so that you can insult him? . . . .
When JT talks about ID's conception of intelligence, he is referring to the idea commonly expressed by ID proponents, along the lines of "ID says that "intelligence" is not reducible to law, matter and energy." Rob, the issue is not with "misunderstand[ing]" JT; it is that JT is quite evidently and even obviously reducing himself to absurdity before our shocked eyes, and you are trying to tell us not to believe our "lyin' eyes." We are telling you instead: please, stop the intellectual self-destruction! Please. PLEASE . . . ! For example, JT is simplistically and EXPLICITLY equating intelligence with randomness [in so many words, cf. above], ending up in the logical fallacy that has now repeatedly been pointed out, e.g. at 96 by bFast:
A is not equal to B. C is not equal to B. Therefore, A is equal to C.
Let A = Random. Let B = Deterministic. Let C = Intelligent agency.
A (random) is not B (deterministic).
C (Intelligence) is not B (deterministic).
Therefore A (random) = C (Intelligence).
It is not an "insult" to point out and correct gross error that has serious real-world consequences; it is to act responsibly. Nor is it an insult to be forced -- in the face of insistence on error -- to reluctantly point out that a reductio ad absurdum is in progress. Wish the intellectual self-destruction were not so, but -- sadly -- it is. [. . . ] kairosfocus
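The dice exercise above is easy to run as an actual experiment in software. A minimal sketch follows; the sample size, the choice of loading, and the chi-square check are illustrative assumptions on my part, not anything prescribed in the thread.

import random
from collections import Counter

def roll(weights, n=6000):
    # Roll a six-sided die n times; weights are the relative chances of faces 1..6.
    return random.choices([1, 2, 3, 4, 5, 6], weights=weights, k=n)

def chi_square_vs_fair(rolls):
    # Chi-square statistic of the observed face counts against a fair die.
    expected = len(rolls) / 6
    counts = Counter(rolls)
    return sum((counts[face] - expected) ** 2 / expected for face in range(1, 7))

fair = roll([1, 1, 1, 1, 1, 1])
loaded = roll([1, 1, 1, 1, 1, 5])  # the six comes up five times as often as any other face

# With 5 degrees of freedom, a statistic above about 11.07 rejects fairness at
# the 5% level; the fair die will typically land well below that, the loaded
# one far above it.
print("fair:  ", round(chi_square_vs_fair(fair), 1))
print("loaded:", round(chi_square_vs_fair(loaded), 1))

Both dice are contingent, but their outcome patterns differ in an objectively testable way, which is the point the exercise is meant to make.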
Atom, you and CJYMan are imputing poor thinking to JT on the basis of this statement: "I am saying that the I.D. conception of “intelligence” or “intelligent agency” equates to randomness because ID says it is something distinct from law." It should be clear by now that JT sees law and randomness as complements, which is logically the case if law=determinism and randomness=non-determinism. So if intelligence doesn't fall in the category of law, then it must fall in the category of randomness. Of course, "equates" implies not only that intelligence is random, but also that everything random is intelligent. If JT didn't mean that, then equates was not the right term for him to use. If he meant to say that ID's version of intelligence entails randomness, do you agree with him? Presumably, the ID position is that intelligence is characterized not only by contingency (can we agree that this means randomness?), but also by directedness. So now ID's task is to come up with a reasonably unambiguous definition of the distinction between "directed" and "undirected". Contrast directedness with determinacy, which is well-defined. We can treat determinacy as a boolean variable -- that is, processes are either fully deterministic or they're not. Or we can talk about a continuum with non-deterministic at one end and deterministic at the other. Is directedness a sub-spectrum of the determinacy spectrum, or is it orthogonal to determinacy? If computer programs are capable of foresight, does the execution of such a program, with a stochastic input, constitute directed contingency? Is directed contingency the same as libertarian free will? Where can I find a usable definition of directed/undirected in the ID literature? R0b
R0b:
If intelligence, according to ID, isn’t reducible to law, and if the term “law” indicates that a process is deterministic, which seems a reasonable interpretation, then intelligence must be non-deterministic. Other terms that are commonly used interchangeably with “non-deterministic” are “stochastic” and “random”.
Let's consider this by example. You have a pool table with some balls on it. You strike the cue ball with the pool cue. From this point on, what happens on the table is governed by law; it is deterministic. In other words, if you programmed the situation into a computer, accurately measuring everything worth measuring, the computer could accurately predict where all of the balls will end up. Now, introduce randomness to the pool table after the cue ball is struck. (I don't know, vibrate the thing in a truly random way.) Your computer program can no longer accurately predict where the balls will end up. Introduce intelligence. The cue ball is struck, but under the table are a bunch of pegs that can be pushed to raise up lumps in the table. Have an intelligent agent guide the balls to where he would have them go. The computer program cannot predict where the balls will go; it is not deterministic. However, it is also not random. (A toy simulation of this three-way contrast appears just after this comment.) CJYman:
A is not equal to B. C is not equal to B. Therefore, A is equal to C.
Let A = Random. Let B = Deterministic. Let C = Intelligent agency.
A (random) is not B (deterministic).
C (Intelligence) is not B (deterministic).
Therefore A (random) = C (Intelligence).
Man this conversation is stupid! bFast
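bFast's pool-table contrast lends itself to a toy simulation. The sketch below is mine, with the one-dimensional table, the gains, the noise level, and the pocket position all being illustrative assumptions: the same update rule is run purely by law, with random vibration, and with an agent nudging the ball toward a chosen pocket.

import random

def run(nudge, x=0.0, v=1.0, dt=0.1, steps=200):
    # Integrate a crude one-dimensional "ball": each step, the nudge adjusts velocity.
    for _ in range(steps):
        v += nudge(x, v)
        x += v * dt
    return round(x, 2)

law = lambda x, v: 0.0                             # necessity alone: no nudges
chance = lambda x, v: random.gauss(0.0, 0.05)      # undirected contingency: vibration
design = lambda x, v: 0.02 * (5.0 - x) - 0.05 * v  # directed contingency: steer toward the pocket at 5

print("law:   ", run(law))     # identical every run (20.0)
print("chance:", run(chance))  # different every run, and aimless
print("design:", run(design))  # homes in on 5.0: the outcome tracks the goal, not the initial push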
ROb, you wrote this: "Speaking of computers: When a non-interactive program is executing, the physical state of the system evolves according to the laws of physics." and I wrote this: "As he spent month after month working through a Parisian moveable feast, Hemingway found that his Old Man and the Sea evolved very nicely." And yet again, we . . . hey wait a minute! We are sort of finally making some sense! Well, except for the use of the word evolve. Both the novel and the physical state of the computer changed, but they did not evolve except according to the novelist's will and the initial information that was encoded, and I think we know how that type of evolution matches with Neo-Darwinian evolution. . . er, not very well. Tim
Tim:
Of course, at this point I take back what I wrote, how about you?
No, but I will gladly do so if you tell me how it doesn't make sense. To speak of the reducibility of cats to dogs is a category error. I don't see how speaking of the reducibility of intelligence to law has the same problem. If it does, then you should inform your fellow ID proponents. R0b
R0b, you wrote this: "If "foresight" makes a process "directed", then computers are apparently capable of directed contingency, as long as they have a random number generator that meets ID's definition of contingency." And I wrote this: "If bald foreheads make a man sexy, then computers are apparently sexy as long as they have a randomly chosen surface that is shiny and hairless, thus somehow matching someone's definition of bald." Again, neither one of us is making any sense, and I retract my statement. Tim
R0b, I'm sure JT appreciates you coming to his defense, but he really is making bad points. As CJYMan sums up:
Your reasoning ability is horribly lacking at best. You are stating:
A is not equal to B. C is not equal to B. Therefore, A is equal to C.
Intelligence is non-deterministic; randomness is non-deterministic; therefore, Intelligence equals randomness. That really is bad thinking; there is no soft way of saying it. As I pointed out (and others have as well), while Intelligence appears to be non-deterministic, it is also simultaneously directed. KF made this point, I made this point, everyone, it seems, has made this point, but you guys either miss it or don't understand the importance of it. Again, I offer my simple analogy of two forks in the road:
1) Law (determinism) says "Always take the left road."
2) Randomness says "I will take the left 50% of the time, and the right 50% of the time."
...however...
3) Intelligence says "I will take the path that leads me to the destination I'm headed to."
While statistically the left-right choices of an intelligent agent may appear to almost mimic randomness (a 50-50 split), they don't have to and sometimes will not. They are contingent choices. Randomness is contingent, Intelligence is contingent, but Intelligence != Randomness. Furthermore, there is already a theoretical model dealing with contingent decision-making computation devices: Non-deterministic Automata. I already mentioned this as well. When dealing with NFAs it is implicitly assumed that if an accepted final state can be reached by some possible path, the NFA will reach it. (In other words, we model that it makes non-deterministic choices, meaning different outcomes for the same state/input combination, and that there is a sense of teleology in that we're seeking out accepted final states.) It almost seems like you're getting frustrated at the conversation, but we're not. (At least I'm not.) I think it's funny how JT is trying to push such a bad point. Continue on. Atom
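Atom's automata point can be made concrete. In the textbook subset construction, an NFA accepts exactly when some path of choices reaches an accepting state, which is the assumed "teleology" he describes. A minimal sketch; the particular machine and input are illustrative assumptions, not anything from the thread.

def nfa_accepts(transitions, start, accepting, word):
    # transitions maps (state, symbol) -> set of possible next states.
    current = {start}
    for symbol in word:
        current = set().union(*(transitions.get((s, symbol), set()) for s in current))
        if not current:
            return False
    return bool(current & accepting)

# The fork in the road: from 'fork', 'go' may branch left or right, but only
# the right-hand branch reaches the castle. Acceptance asks whether some
# sequence of choices gets there.
transitions = {
    ("fork", "go"): {"left", "right"},
    ("right", "go"): {"castle"},
    ("left", "go"): {"swamp"},
}
print(nfa_accepts(transitions, "fork", {"castle"}, ["go", "go"]))  # True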
R0b, you wrote this: "If intelligence, according to ID, isn't reducible to law, and if the term "law" indicates that a process is deterministic, which seems a reasonable interpretation, then intelligence must be non-deterministic. Other terms that are commonly used interchangeably with "non-deterministic" are "stochastic" and "random"." And I wrote this: If cats are "not reducible" to dogs and dogs are furry, then cats must be non-furry. And neither one of us is making any sense. Of course, at this point I take back what I wrote, how about you? Tim
kairosfocus:
I think you need to pause and do some learning from those you would object to, or you will simply reduce your case into ever worse depths of reduction to absurdity.
Yes, JT. If you want to have the same kind of success that the ID movement enjoys, then you need to learn some things from UD denizens. kairosfocus, CJYman, and Tim: Do you really not understand JT's points, or are you only pretending to misunderstand them so that you can insult him? I'll assume the former and make a feeble attempt at explaining them. When JT talks about ID's conception of intelligence, he is referring to the idea commonly expressed by ID proponents, along the lines of "ID says that "intelligence" is not reducible to law, matter and energy." If intelligence, according to ID, isn't reducible to law, and if the term "law" indicates that a process is deterministic, which seems a reasonable interpretation, then intelligence must be non-deterministic. Other terms that are commonly used interchangeably with "non-deterministic" are "stochastic" and "random". Furthermore, "intelligence", "design", and "agency" seem to be related terms in ID terminology, so presumably those terms entail non-determinism also. ID proponents sometimes use the term "contingency", although they don't agree on what they mean by it. Furthermore, they differentiate between "directed" and "undirected" contingency. If "foresight" makes a process "directed", then computers are apparently capable of directed contingency, as long as they have a random number generator that meets ID's definition of contingency. Speaking of computers: When a non-interactive program is executing, the physical state of the system evolves according to the laws of physics. This is true regardless of how the system got into the state that included a loaded program that was starting to run, which means that it's independent of the question of who or what designed and programmed it. This is what we mean when we say that computers operate according to law, or law and chance. If there are other points that you don't understand, you might try asking questions instead of hurling insults. If the ID camp wants to improve its status in the research and academic communities, then kairosfocus's advice is better directed to ID proponents. And if anyone thinks that JT's or my usage of ID terms is unreasonable, then they should work on coming up with definitions that don't raise more questions than they answer, and then using the terms consistently. R0b
Pardon: accidentally cross-threaded. JT: Here is how the UD glossary defines intelligence, acceptably for ID purposes:
Intelligence – Wikipedia aptly and succinctly defines: “capacities to reason, to plan [which plainly implies foresight and is directly connected to the task of designing], to solve problems [again foresighted and goal directed], to think abstractly, to comprehend ideas, to use language, and to learn.”
In short, we are not using any unusual or idiosyncratic definition. Indeed, we used the Wiki definition for the excellent reason that it is an admission against interest by an entity known to be strongly opposed to ID, to the point of willful, insistent distortion and slander. That’s about the strongest form of evidence you can get: what intelligence is, is so well and so widely understood, that they could not come up with an “acceptable” definition that would cut off ID at the knees. GEM of TKI PS: JT, to save yourself further embarrassment, kindly take some time out and read the ID glossary and weak argument correctives. kairosfocus
JT, 85:
When I say that an AI program operates according to chance and necessity I mean it operates according to a program. The 'chance' aspect would enter in primarily if there are chance attributes in the program's [e.g. a robot's] environment. By saying it operates according to chance and necessity I do not mean that the program fell together by chance.
1 --> Programs work by algorithms, implemented through arbitrary symbolic codes that are dynamically inert but informationally functional, executed physically through specific irreducibly complex architectures, i.e. particular and specific organisations of processors and associated elements. (Pardon a bit of bio, but it is relevant to my point: I got to the stage where I could "read" 6800 and 6809 hex codes directly at one time . . . and hex code for one 6800 system will not work in another one set up with a different memory map, much less a 6502 system (though the hardware was compatible), and certainly not in the architecturally very similar PDP 11, of which the 6800 family was in effect an 8-bit port. Don't even try to go feed a 6800 EPROM over to an 8080 or a Z80! You will "let some smoke out" of the chips for sure!! [I never did "get" that address/data bus thingie . . . even though it was true that A and D fetches are temporally disconnected.]) 2 --> Natural law works by dynamical forces and patterns tracing to strong and weak nuclear forces, electro-magnetic forces and gravitation. A completely different pattern. 3 --> It seems fairly clear, therefore, that the only way you could say the excerpted is because -- sad to have to be direct -- you do not understand the nature and role of information in information systems; especially at decision nodes. 4 --> I have already pointed out that contingency is distinct from lawlike necessity giving rise to natural regularities, and that it may happen in two distinctive, empirically recognisable ways: (1) undirected, stochastic contingency (chance); (2) purposefully directed contingency (design). 5 --> A program is the latter, including an AI program, and Dr Dembski's explanatory filter is predicated upon that difference. (NB: In the original form, he did not sufficiently emphasise that he is looking at particular isolable aspects of the behaviour of systems or objects; which I and others now have. By integrating the analyses of the various aspects, one may see how the whole operates in ways that bring together chance, necessity and design, without confusing any of the three as -- pardon me -- you have.) _______________ JT, pardon some direct advice: I think you need to pause and do some learning from those you would object to, or you will simply reduce your case into ever worse depths of reduction to absurdity. GEM of TKI kairosfocus
JT: "I am saying that the I.D. conception of “intelligence” or “intelligent agency” equates to randomness because ID says it is something distinct from law." Your reasoning ability is horribly lacking at best. You are stating: A is not equal to B, C is not equal to B, Therefore, A is equal to C. If you can't figure out the error, I can see why some people here can't continue the discussion with you. CJYman
Sorry to keep this going just a bit longer, but I have some hope that JT may just "get it" very soon. I asked: "Do you use "magic" when you have a future goal of communicating a concept with someone on this board and then engineer a comment in order to attempt to acquire that future goal? Does an AI system use "magic" to model the future and then generate a target and steer itself toward that target -- i.e. a chess program samples future possible states and then follows the rules of the game while aiming at a future goal of winning the game according to specified rules." JT, you responded: "OK, so now you're saying an AI system is intelligent according to I.D.'s definition of the term. But an AI system operates according to chance and necessity." First, just answer the question, since it has everything to do with your claim that intelligence is equal to randomness and that intelligence is no different than "magic." You are the one claiming that appealing to intelligence is just like appealing to "magic" while at the same time completely ignoring one fundamental aspect whereby most ID proponents define intelligence -- that is, "foresight." Do you have, and do you ever use, your foresight (as I have defined it for the purpose of ID theory)? Second, AI systems are called *artificial intelligence* for a reason. They model future possibilities and work toward a future target which does not yet exist, as in the chess program example. Thus, they have the most rudimentary form of foresight without being conscious of their foresight. They have artificial, as opposed to conscious or "real", foresight. Finally, AI systems are more than just law and chance. As already explained, and completely ignored by yourself, AI fundamentally consists of programming. There is a very important distinction between this programming (instructional information) and law or chance. The distinction is that this instructional information is made up of a sequential organization of states which is neither defined by law as mathematical descriptions of regularity, nor is this organization defined by law as a result of any physical-chemical properties of those states. Furthermore, chance/statistical randomness/background noise is not a good explanation of the programming necessary to create AI, and I'm sure you would agree. If not, please provide evidence for your assertions. KF unfortunately had to remind you of the very simple fact that the programming necessary for an AI system comes from a programmer using his foresight (one aspect of intelligence). JT: "Something that can be described by a program is not I.D.'s intelligence, unless I.D. has changed its meaning of the term." Some IDers think that conscious intelligence can't be described by a program. I am agnostic on that; however, this contention is not necessary in order to define aspects of intelligence and then detect its effects. I have merely shown that some programs do indeed have a non-conscious form of foresight. Thing is, though, that these AI systems require conscious intelligence (conscious foresight) in their full causal chain. Furthermore, even if conscious intelligence were able to be defined as a program, that still doesn't show that it is reducible to chance and law, because of the instructional information (non-lawful, non-random, highly improbable, functionally specified states) at the base of the program.
This would then further require previous intelligence and thus the potential of an intelligence - information - intelligence loop with no room for merely chance and law. JT: "It's evident you've changed the meaning." Nope, foresight is definitely integral to intelligence. Look up the meaning of intelligence and you will see that most terms which define intelligence require foresight. I stated: "Intelligence is describable by laws *plus* information." JT: "I think you said somewhere that an AI program is laws plus information." Yes, I've stated and also shown that over and over, and all you have for a comeback is ... JT: "an AI program is chance + necessity." ... which I have shown to be incorrect. JT, you then conclude with: "If necessity is above a certain threshold of complexity you evidently want to rename it "information". Rename whatever you want." The "information" that people here, including myself, have been trying to explain to you is the exact opposite of "necessity" (low contingency). It has nothing to do with "necessity" above a certain threshold, so I have no idea what "renaming" you are talking about. There are two types of information that have been explained to you in this discussion: 1. CSI, which is not just "necessity above a certain threshold of complexity." It is a highly contingent pattern of specified complexity which uses up all probabilistic resources. Dembski merely took a term which was created before him (as KF has explained over and over and over again) and put some math behind it. 2. Instructional information. This is the information that I have been trying to explain to you for quite some time now. I've explained the significance of instructional information (which is neither defined nor caused by law nor best explained by chance) to you above. CJYman
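CJYman's chess-program illustration of non-conscious, "artificial" foresight can be shrunk to a toy. In the sketch below (mine; the take-1-to-3-stones game stands in for chess purely for brevity), the program examines future positions before choosing a move, which is the rudimentary modeling-the-future behaviour he describes.

from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones):
    # Game: players alternate taking 1-3 stones; whoever takes the last stone wins.
    # True if the player to move can force a win from this position.
    return any(stones - take == 0 or not can_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    # "Foresight": sample the futures each move creates and pick one that
    # leaves the opponent in a losing position.
    for take in (1, 2, 3):
        if take <= stones and (stones - take == 0 or not can_win(stones - take)):
            return take
    return 1  # every move loses; take the minimum and hope

print(best_move(10))  # 2 -- leaving 8 stones, a lost position for the opponent

Whether this mechanical lookahead counts as genuine foresight or only as its programmed residue is, of course, the very question the thread is disputing.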
KF: When I say that an AI program operates according to chance and necessity I mean it operates according to a program. The 'chance' aspect would enter in primarily if there are chance attributes in the program's [e.g. a robot's] environment. By saying it operates according to chance and necessity I do not mean that the program fell together by chance. Tim:
There are certainly those who claim that the ideas listed above are illusion, mere memes, pre-determined, and now equated in some way to randomness
You're just not reading my posts or something. I do not think anything operating according to law is random -- that would include programs, animals and so on. I am saying that the I.D. conception of "intelligence" or "intelligent agency" equates to randomness because ID says it is something distinct from law. Bill Dembski would (in fact should) inform you all that a computer program operates according to chance and necessity. I don't have any more time to devote to this right now. I may try to come back and revisit this discussion later today. JT
JT, look what you did with the following claim concerning agency. You wrote that "[Dembski] just continues to blithely assume that 'agency' is self evident." Your argument seems to revolve around saying that Dembski is merely being blithe, and by implication uncritical, blind, stubborn and stupid. I think you should consider the following . . . Don't you realize that everybody finds agency to be self evident? Notice that I did not say that everybody believes in agency, or in free will, or in creative power, or foresight, etc . . . What I did say is that everyone finds it to be self evident. There are certainly those who claim that the ideas listed above are illusion, mere memes, pre-determined, and now equated in some way to randomness (thanks, but really, do you want to go there?), etc . . . My point is this: Look at the sophistry needed to maintain such a point; look how it leads in so many ways to ridiculous conclusions concerning epistemology; look at the utter vacuity of any type of heuristic advantage for those who hold such a view; look at the company you keep! CJYman's and KF's posts are devastating to your position. Instead of latching on to and extending a new argument yet again, try to rebut the general ideas; otherwise, you come off as a troll. IMO, you have done nothing to support your assertion except to rely on the idea that, well, you have no real intelligence. This is not me calling names; this is me reminding you of your main point! Tim
8 --> Just so, directed contingency is not (a) simply reducible to or (b) equivalent to undirected, stochastic contingency. And to claim either of these is to reduce oneself to absurdity. 9 --> Similarly, while all viable aircraft designs had better play by the rules of aerodynamics, thermodynamics, structural mechanics, etc., aircraft are designed; they are neither simply chance nor chance + necessity alone. They exhibit KNOWN, purposefully directed contingency -- vast, designer-specific and case-specific contingency, in fact -- while also showing commonalities of structure, materials, etc., reflecting underlying constraints of the forces and materials provided by nature, starting with atoms and the four general forces. 10 --> Perhaps it is your case that designers, per evolutionary materialist thought, are themselves the product of chance + necessity, so their work reduces to chance + necessity. On pain of question-begging, that has to be SHOWN, not merely asserted; it is one of the issues in contention. 11 --> And ID simply says: we see known characteristics of design in every case where we know the causal story independently. Such as FSCI, IC, etc. So, on empirically based inference to best (of course provisional) explanation, we can safely infer to design when we see such. 12 --> Moreover, we know that once the algorithmic storage, or the functional specification, exceeds 1,000 bits, it is reasonable, on search-space grounds, that functional configs cannot be accessed by chance-dominated processes. 13 --> In the case of DNA, the observed storage starts at 600 k bits. But, one might be inclined to argue for biochemical predestination a la Kenyon in 1969. That failed empirically, and indeed the chaining chemistry and the informational function are essentially independent the one from the other. 14 --> Not to mention, the algorithms, the storage molecules, their interface molecules, the code and its interpretation, and the effecting machinery all have to come together in the right configuration at the right time for the system to work. Drop any one and it fails. Irreducible complexity. 15 --> But maybe DNA etc are written into the laws of our cosmos, so that necessity includes in effect a life program. Such has not been observed, of course, but if it were observed, that would implicate something else: design of the cosmos as a whole to facilitate and then produce life. _________________ So, JT, you need to show us now that you are engaging in serious mutual dialogue, rather than one-way, question-begging, strawman-knocking, closed-minded, fallacious, objection-spewing trollish monologue. GEM of TKI kairosfocus
JT, 64:
randomness by itself is a nonstarter . . . I.D.’s concept of intelligence is also a non-starter as it also equates to randomness.
1 --> JT, kindly show us just how design equates to randomness. 2 --> If you mean that randomness and design both have in them high contingency, but of course. For, that is why both are in that sense opposite to mechanical forces. 3 --> Indeed, the explanatory filter looks at different aspects of an object or phenomenon, and where it sees natural regularity [a predictably determined outcome once certain inputs are there] it infers to mechanical, law-like force of necessity. (E.g. a dropped heavy object reliably falls. At the rate of 9.8 N/kg on earth.) 4 --> But, sometimes there are aspects that show high contingency instead. That is, under very similar initial conditions, we get quite diverse outcomes. E.g. if the dropped object is a 6-sided fair die, its uppermost side is quite contingent for practical purposes. (Thanks to: 8 corners, 12 edges and resulting sensitivity to initial and onward circumstances obtaining as it hits and tumbles. No practically feasible calculation will generally give the exact outcome reliably; at least, that's what Las Vegas bets its income on. Quantum indeterminacy would only add to that.) 5 --> So, we can conceptually recognise that chance processes are those that exhibit credibly undirected, stochastic contingency. (E.g. a fair die distributes its outcomes across {1,2,3,4,5,6} with odds of 1/6 each. Notice: a fair die can be integrated into a wider designed and even rules-based context or a program. Dice are used in games, including e.g. table-top war games. And if we use a pair or a triplet, the summed outcomes most certainly are NOT flat-even. [Are you willing to argue that the summed outcome is NOT a random variable? That clustering of micro-state outcomes to give recognisable macrostates of different probability, resting on the numbers of ways to get to them, is foundational to statistical thermodynamics, BTW.]) 6 --> But some dice are NOT fair, i.e. they are loaded. So -- as I and others here have pointed out to you and others of your ilk repeatedly in recent weeks here at UD -- they show credibly directed, purpose-oriented contingency. That is, DESIGN. 7 --> As Las Vegas gaming houses know, loaded dice are NOT equivalent to fair ones, and dice-loaders are not equivalent to those who play with fair dice. Nor are such simply reducible the one to the other. [ . . . ] kairosfocus
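The summed-dice aside in 5] is easy to verify by enumerating microstates; a quick sketch:

from collections import Counter
from itertools import product

# Count the microstates (ordered face pairs) behind each macrostate (total).
ways = Counter(a + b for a, b in product(range(1, 7), repeat=2))
for total in sorted(ways):
    print(total, ways[total], "/ 36")

A total of 7 is reachable six ways (1+6 through 6+1) while 2 and 12 are reachable only one way each, so the summed outcome is indeed a random variable with a decidedly non-flat distribution.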
But an AI system operates according to chance and necessity.
No, an AI system operates according to its program under the constraints of the laws of physics. Chance may have a small role, but it would be very small and may only be apparent when something fails. JT, what ID literature have you read? The following offers a list that you should read: Recommended Literature Pertaining to Intelligent Design. Please check out that list and tell me which books you have read. Thanks... Joseph
JT, 79:
an AI program is chance + necessity. If necessity is above a certain threshold of complexity you evidently want to rename it “information”.
JT, AI programs are duly designed based on algorithms, coded and run on machines, debugged and eventually commissioned by PROGRAMMERS. They are observed to be artifacts of design. If an AI program requires more than 1,000 bits of functional information -- about 125 bytes worth -- the number of configs that that capacity specifies is more than 10 times the SQUARE of the number of quantum states accessible by all the atoms of the observed cosmos across its credible lifetime. That is, we cannot sample more than 1 in 10^150 of the config space. That is a short program, and chance + necessity are not reasonably capable of accessing such functional complexity. Transferring to life, from first life, we observe DNA storing from 600k bits up, and increments of 10s to 100s of millions of bits to get to novel body plans. That is vastly beyond the reach of chance but is well within the reach of programmers. And if you want to say that the necessity of built-in laws/forces of nature cut the odds, by directing the contingency, you are effectively saying that someone monkeyed with the laws of nature to set up life. (And, in the teeth of the evidence on DNA and amino acid chaining, which is not that sharply constrained -- it is the very flexibility of the chaining that makes them so useful for life forms! Cf where Dean Kenyon's 1969 biochemical Predestination thesis wound up by 1984, and why Kenyon is now a design thinker. Check out the foreword by Kenyon to Thaxton et al's The Mystery of Life's Origin.) Pardon, your reductio ad absurdum is showing . . . GEM of TKI kairosfocus
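The search-space arithmetic above can be checked directly with exact integer computation; a short sketch:

# 1,000 bits specify 2**1000 distinct configurations; the comment above
# compares that with the square of 10**150.
configs = 2 ** 1000
states_squared = (10 ** 150) ** 2  # = 10**300
print(configs // states_squared)   # 10 -- i.e. a bit over ten times larger
print(len(str(configs)))           # 302 digits: 2**1000 is roughly 1.07e301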
You continue: "Randomness is impossible to predict, I.D. says intelligence cannot be predicted. For I.D. Intelligence is a magical black box that has "foresight" via some indescribable method, and can output functionally complex artifacts via some indescribable method. Maybe "magic" would be a more appropriate term." Where did that come from? Do you use "magic" when you have a future goal of communicating a concept with someone on this board and then engineer a comment in order to attempt to acquire that future goal? Does an AI system use "magic" to model the future and then generate a target and steer itself toward that target -- i.e. a chess program samples future possible states and then follows the rules of the game while aiming at a future goal of winning the game according to specified rules.
OK, so now you're saying an AI system is intelligent according to I.D.'s definition of the term. But an AI system operates according to chance and necessity.
While it is true that conscious intelligence is not yet understood, why would you equate it with "magic?" I honestly have no idea what you are talking about when you use that term. Can you define it, please? Non-conscious intelligence is understood, and programmers understand how it operates.
If they understand how it operates, or someone can potentially understand how it operates, then it's not magic. Something that can be described by a program is not I.D.'s intelligence, unless I.D. has changed its meaning of the term. It's evident you've changed the meaning.
Intelligence is describable by laws *plus* information.
I think you said somewhere that an AI program is laws plus information. An AI program is chance + necessity. If necessity is above a certain threshold of complexity, you evidently want to rename it "information". Rename whatever you want. JT
CJYman, JT is a space cadet, a troll, a mindless wonder who spews forth idiocy just to be argumentative. He doesn't understand what intelligence is because he doesn't have any. Write him off. bFast
Hello JT, I may be getting a little snippy myself, and I apologize in advance, but it is clear that you are confusing yourself and seem to have no idea what you are talking about, nor does it seem that you understand what you are criticizing. If you have questions, please ask, but I can get a little impatient when assertions sans evidence are flung around. You start by stating: "...because I.D.'s concept of intelligence is also a non-starter as it also equates to randomness." I ask: "Can you please show how foresight...equates to randomness?" You continue on with: "Randomness means not governed by law. I.D. says intelligence is not governed by law." Yes, that is part of it. Randomness is equated either with something being unguided, or with a lack of pattern -- which is akin to a lack of law (law describes regular patterns). This is why I later stated that "I understand how you may be saying that highly contingent patterns are characteristic of both intelligence and randomness." Highly contingent patterns, btw, are not characterized by law. You came back with: "No that's not what I mean." Yet that is exactly what you stated above, when you said that both intelligence and randomness are not governed by law -- thus they will both produce highly contingent events. In light of this misunderstanding, I recommend you read through both of my responses again (they really are quite short, yet to the point). You continue: "Randomness is impossible to predict, I.D. says intelligence cannot be predicted. For I.D. Intelligence is a magical black box that has "foresight" via some indescribable method, and can output functionally complex artifacts via some indescribable method. Maybe "magic" would be a more appropriate term." Where did that come from? Do you use "magic" when you have a future goal of communicating a concept with someone on this board and then engineer a comment in order to attempt to acquire that future goal? Does an AI system use "magic" to model the future and then generate a target and steer itself toward that target -- i.e. a chess program samples future possible states and then follows the rules of the game while aiming at a future goal of winning the game according to specified rules. While it is true that conscious intelligence is not yet understood, why would you equate it with "magic"? I honestly have no idea what you are talking about when you use that term. Can you define it, please? Non-conscious intelligence is understood, and programmers understand how it operates. All that needs to be done is to understand how to generate consciousness and then mix it with intelligence. JT: "I do not believe humans or any other animal operate via what I.D. calls "Intelligence" (i.e. a magical process that is not actually characterizable or describable by laws)." Why don't you use the definition that I used when describing intelligence and foresight? Why did you feel it was expedient to make up a straw-man definition including a word -- "magic" -- which basically means "I don't understand how it works"? Of course, we don't understand everything about how conscious intelligence -- conscious foresight -- works yet. Yet we do understand that foresight (as I have defined it) does exist and is responsible for certain effects. What's your point? JT: "The most complex program in existence operates according to laws - not the laws of electricity or gravity but the laws of its own program." Intelligence is describable by laws *plus* information.
Every example of AI has programmed instructions consisting of an organization of states not defined by any physical properties of the states. The organization of these states is also highly contingent. Thus, intelligence is not reducible to *only* laws. Although laws are a component, you are missing another highly contingent component -- that of these instructional states. By now you should understand that high contingency is not characteristic of law, yet is characteristic of randomness and of intelligence (foresight, as I have defined it above). So, are the programmed instructions at the base of an intelligent system better characterized by previous intelligence (including foresight) or by randomness? I've already explained this to you in many ways a few times before, but you just don't seem to be getting it. Are you seeing the potentially necessary intelligence - information - intelligence loop yet? I stated: "Furthermore, if you are asserting that randomness will produce the same effects as intelligence..." You responded with: "That's not what I mean either. Well, let me qualify that. I don't think randomness produces the same effects as humans, or the same effects as dogs or birds or gravity, because all these things are governed by laws. In the case of animals, very complex laws -- that is, the complexity of their physical and chemical configuration and the physical laws governing chemical reactions and so forth." ...and these laws arise out of the non-lawful information at the very base of life. If you are not stating that randomness will produce the same effects as a foresight-using system, then I have no idea what your point is, since you would then have no problem with ID Theory: one could then separate the effects of randomness, law, and intelligence. Thus, the effects of intelligence are detectable. JT: "I'm not presenting some esoteric, obscure concept. Why it would elude someone like Dembski, for example, is just an utter mystery. Is he even responsive to such arguments? To the best of my knowledge, no -- he just continues to blithely assume that "agency" is self evident. Same with Durston, et al." Foresight is self evident, since we all experience it every day. We use our foresight to imagine a future goal that does not yet exist and then work to produce that goal. In many cases, these goals are neither best definable by law nor by randomness. CJYman
JT, Intelligence is not Randomness, though they share some characteristics, as you've pointed out, but is akin to non-deterministic finite state automata. In the computer science underlying these, you see that one state may branch into two or more states even with the same input, so it is non-deterministic (like randomness). But NFAs will always be assumed to take the path that leads to an accepted final state. In the same way, let's say you have a fork in the road. Determinism will say "Always take the left path." Randomness will say "I have a 50/50 chance of taking either path." Intelligence will say "I'll take the path that leads me to the castle (or the path that has a bridge over the river)." That is as succinctly as I can put it. Hope that helps. And if an agent seeking accepted final states seems to require a notion of "purpose", then tell that to mainstream automata theory. (Furthermore, we shouldn't be surprised if Intelligence is defined with relation to Purpose... isn't teleology what the entire debate is about?) Atom
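Atom's fork can be written out as three one-line decision policies; a purely illustrative sketch, with the castle lookup standing in for whatever goal-knowledge the agent actually has.

import random

def law(fork):
    return "left"                             # necessity: always the left path

def chance(fork):
    return random.choice(["left", "right"])   # undirected contingency: 50/50

def design(fork):
    return fork["path_to_castle"]             # directed contingency: pick the goal

fork = {"path_to_castle": "right"}
# law is predictable; chance and design are both contingent across different
# forks, but only design reliably arrives at the castle.
print(law(fork), chance(fork), design(fork))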
JT, ID says that "intelligence" is not reducible to law, matter and energy. Also ID says there is no way to predict what any advanced intelligence will design. And it's not that the methods are indescribable, it is that the methods are a SEPARATE question. Joseph
bFast, My point with Dr Spetner is that some/many of the mechanisms Allen speaks of have been discussed by Spetner in 1997. I'm just saying that Allen should familiarize himself with the arguments he says he is refuting. But anyway, as far as I understand it, Spetner's argument wasn't that something could not evolve (was unevolvable); it was that its evolution was not via an accumulation of genetic accidents -- the "non-random evolutionary hypothesis", with "built-in responses to environmental cues" being the main driver. And that is regardless of whether or not he is correct about some claim of a digestive enzyme. Joseph
JT, "Randomness means not governed by law." Huh? Dictionary.com: "proceeding, made, or occurring without definite aim, reason, or pattern." According to dictionary.com, randomness is the antithesis of foresight. bFast
CJYMan:
JT: “…because I.D.’s concept of intelligence is also a non-starter as it also equates to randomness.”
Can you please show how foresight...equates to randomness
It's very simple, really. Randomness means not governed by law; I.D. says intelligence is not governed by law. Randomness is impossible to predict; I.D. says intelligence cannot be predicted. For I.D., Intelligence is a magical black box that has "foresight" via some indescribable method, and can output functionally complex artifacts via some indescribable method. Maybe "magic" would be a more appropriate term. I do not believe humans or any other animal operate via what I.D. calls "Intelligence" (i.e. a magical process that is not actually characterizable or describable by laws). The most complex program in existence operates according to laws - not the laws of electricity or gravity but the laws of its own program.
I understand how you may be saying that highly contingent patterns are characteristic of both intelligence and randomness
No that's not what I mean.
Furthermore, if you are asserting that randomness will produce the same effects as intelligence
That's not what I mean either. Well, let me qualify that. I don't think randomness produces the same effects as humans, or the same effects as dogs or birds or gravity, because all these things are governed by laws. In the case of animals, very complex laws -- that is, the complexity of their physical and chemical configuration and the physical laws governing chemical reactions and so forth. I'm not presenting some esoteric, obscure concept. Why it would elude someone like Dembski, for example, is just an utter mystery. Is he even responsive to such arguments? To the best of my knowledge, no -- he just continues to blithely assume that "agency" is self evident. Same with Durston, et al. I don't know what more I could say at this point. (Sorry if I've gotten a bit snippy.) JT
bFast, Please contact me offline with the details of the langur monkey calculations. (I have a contact form on my website, linked to my name.) I sometimes correspond with Dr. Spetner and would like to confirm if this is the case (and pass along the information to him, if it is). Thanks, Atom
Joseph, I personally am not a fan of Spetner or of "Not By Chance". The problem I have is that I tested the veracity of one of his claims, and found it wanting. He suggested that the reported mutations in the digestive enzyme of the langur monkey were vastly unevolvable. I got hold of the research he cited, did the math carefully, and found it to be not so. Further, the research had done a good job of presenting the math. The ID community does not have the privilege of being sloppy with its claims. Spetner has been sloppy with his claims. bFast
Dr. Spetner discussing transposons -- page 44 of "Not By Chance":
A transposon has in it sections of DNA that encode two of the enzymes it needs to carry out the job. The cell itself contributes the other necessary enzymes. The motion of these genetic elements to produce the above mutations has been found to be a complex process, and we probably haven't yet discovered all the complexity. But because no one knows why they occur, many geneticists have assumed they occur only by chance. I find it hard to believe that a process as precise and well controlled as the transposition of genetic elements happens only by chance. Some scientists tend to call a mechanism random before we learn what it really does. If the source of the variation for evolution were point mutations, we could say the variation is random. But if the source of the variation is the complex process of transposition, then there is no justification for saying that evolution is based on random events.
IOW, many of Allen's 47+ engines of change make up Dr Spetner's "non-random evolutionary hypothesis". And that is why I recommend he read the book -- that is, to find out what is being debated, at least as far as mechanisms go. Joseph
JT, Furthermore, if you are asserting that randomness will produce the same effects as intelligence (foresighted mechanisms -- modeling of the future to generate targets and then engineering law and chance to produce those targets), it's high time you put your money where your assertions are and provide evidence that background noise (mathematically measurable as statistical randomness) will at the very least produce CSI, or maybe an evolutionary algorithm, or active info, or functioning machinery, or anything that even vaguely resembles that which foresight is used to create daily. After all, if intelligence equates with randomness, that should be no problem. Random.org would be a good place for you to start collecting data. (In accordance with your little story, it is the ID proponents who are having the private, or not so private, chuckle re: randomness and intelligence.) P.S. I'm getting a little tired of going around in circles and having a little "assertion festival." This blog is about generating ideas and hypotheses and either providing evidence for them or providing ideas as to how one could test these hypotheses -- not attempting to argue "proof by assertion." CJYman
JT: "...because I.D.'s concept of intelligence is also a non-starter as it also equates to randomness." Can you please show how foresight -- your ability to envision a future goal, create a future target, plan to reach that target, and then sufficiently organize law and chance to construct that target (i.e. construct a blueprint and then build a computer circuit) -- equates to randomness (mathematically defined as 'statistical randomness')? I understand how you may be saying that highly contingent patterns are characteristic of both intelligence and randomness; however, that only means that a highly contingent pattern may be explainable in terms of either intelligence or randomness (chance), and thus further research is necessary to discover which of the two is the better explanation. But that is nowhere close to saying that intelligence equates with randomness, as I've shown in the question I've asked above. CJYman
As for foresight as I have already told you the foresight is programmed into the organism- see Dr Spetner’s “Not By Chance”.
“as I have already told you” — Thanks. We needed the authority of your declaration. Now that we have it, the ID - Darwin debate has ended. Thanks again.
I was talking to Allen -- you know, the guy who thinks he can refute some argument without even knowing what that argument is. Joseph
As for Allen's continuing to say that euks evolved from proks via SET, there is also scientific data which demonstrates that proks "devolved" from euks -- euks came first.
you have said this many times now, but have never provided any sources or citations for those claims, or even a summary of the evidence. could you please do so?
The ONLY time I didn't present the article that supports what I said -- actually, it is the basis of what I said -- is in this thread. But here it is again: Can evolution make things less complicated?
Instead, the data suggest that eukaryote cells with all their bells and whistles are probably as ancient as bacteria and archaea, and may have even appeared first, with bacteria and archaea appearing later as stripped-down versions of eukaryotes, according to David Penny, a molecular biologist at Massey University in New Zealand. Penny, who worked on the research with Chuck Kurland of Sweden's Lund University and Massey University's L.J. Collins, acknowledged that the results might come as a surprise. “We do think there is a tendency to look at evolution as progressive,” he said. “We prefer to think of evolution as backwards, sideways, and occasionally forward.”
OK, if euks aren't a union of proks AND if euks were first on the scene (in any evolutionary scenario), abiogenesis just got a bit more difficult to explain. And if life didn't arise from non-living matter via unintelligent, blind/undirected (non-goal-oriented) processes, there is no reason to infer its subsequent diversity arose solely via those types of processes. Joseph
Tim [62]:
“ ’Someday, I want…to fly.’ So an extremely simple goal resulted ultimately in a big complex artifact”.—JT Behaviors and artifacts based on future goals? Whose side are you on, JT?
You left out the first part: "Did Orville as a child sit around and daydream, 'Someday, I wish I could build a big complicated artifact of metal wire and wood and fabric and rubber'? No." The point being that I.D., I think, would tend to look at all the complex functionality of a plane when drawing a design inference. They wouldn't say, "this thing can fly; that proves it was intelligently designed." But as far as specific goals go, the actual human goal was merely to fly. As the plane took shape, its various physical attributes were essentially imposed upon the designers. At the end of it, I don't think the Wright Bros. were saying, "This is what I thought a flying thing should look like all along. What an incredible thing of beauty." No, in fact they probably succeeded by getting their own aesthetic goals and emotions and expectations out of it and only going where the evidence led, by measuring the data, making adjustments, and so on. So in essence, I tend to look at the actual design as analytic, mechanical, laborious, deterministic. (You know, "genius is 99% perspiration...") The actual physical attributes of the plane had to be random with respect to the expectations the designers had at the beginning of the design process. So the idea is a very simple goal leading to a complex artifact. I am reminded of a brief description of evolutionary algorithms (which I actually haven't kept up with of late), wherein you can start with a simple goal that is specified, but the solution arrived at by the automated process has wildly complex attributes that were not anticipated at all by the programmers. But just to be clear - randomness by itself is a nonstarter. My impression is that evolutionary theorists actually understand this, and a good deal of them understand, for example, the significance of Dembski's work in this regard. However, a good deal of them get a private laugh (or not so private) about various I.D. advocates going on about "Intelligence" in the way that they do, because I.D.'s concept of intelligence is also a non-starter as it also equates to randomness. JT
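JT's description of evolutionary algorithms is easy to make concrete. The toy below is my sketch; the population size, mutation rate, and the deliberately trivial count-the-ones fitness function are illustrative choices. The stated goal is one line, while the particular winning string emerges from the mutate-and-select loop rather than from anyone's anticipation.

import random

BITS, POP, GENS, MUT = 32, 40, 60, 0.02

def fitness(bits):
    return sum(bits)  # the entire stated "goal": as many 1s as possible

def mutate(bits):
    # Flip each bit independently with probability MUT.
    return [b ^ (random.random() < MUT) for b in bits]

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]  # selection: keep the fitter half
    pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(pop, key=fitness)
print(fitness(best), "".join(map(str, best)))

Real uses swap in a richer fitness function, and the evolved solutions routinely carry details nobody specified in advance, which is the feature JT is pointing at.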
. . . after studying the wrecked plane for several weeks started observing, "there are a lot of bicycle parts in here. This thing might have originated in a bicycle shop." That would be a valid inference to a true fact."—JT I disagree. The reason I disagree is that although you've allowed the farmer to be a little loose with his phrasing, " . . . originated in . . . ," such phrasing is not appropriate in science. What does he mean? Does he mean that based on what he found, the plane in his field was built in a bicycle shop? Or does he mean that based on what he found, the plane was conceived in a bicycle shop? . . . built in a shop based on a bicycle shop? Etc . . . It is nonsense to believe that it "would be a valid inference to a true fact," as long as you have the idea of "originated." Nonsense, not because of the validity, the inference, or the facts, but because the statement is incoherent, i.e. lacking clarity. Consider the Wright brothers and their airplane. (I wonder if Dawkins used this "analogy" as a suggestive proof of evolution; no, he wouldn't try something as obviously misleading as a vague term, would he? He is famous and a professor and all that!!) When it eventually landed too hard somewhere (ooo-eee, maybe on a farm!) and was never flown again, an approaching farmer might make an inference after observing the bike parts: "This thing might have originated in a bicycle shop." Ridiculous. First of all, a farmer would say, "Get that cr@p off my property!!!" Then he would notice the parts and say, "That looks like it was built in a bicycle shop! Where are those stinking Wrights? So help me . . ." And that's the part of the story that is so interesting. It is when the farmer notices the engine, or really any novel part that doesn't belong on a bicycle, that he immediately turns not to any bicycle shop owner to complain about the mess on his rye, but specifically to the one bicycle shop that showed the innovation of combining novel structure in a new design that was meant to fly. "When you actually start looking closely at what we call human "design", it looks an awful lot like evolution."—JT I disagree. When I look at design, it looks unlike evolution. Just try to evolve a limerick! ". . . if human design is a physical process, that is, does not entail metaphysical attributes in humans, then human design is a physical process that results in complex artifacts."—JT Here I guess we can assume that what you mean by metaphysical is any telic cause, or creativity, or innovation, or dare I say it . . . inspiration. Of course, I disagree that human design is a merely physical process. You apparently adhere to this idea; feel free to explain . . . "And you can't just throw out the long history as if it's irrelevant to the Wright Bros, and bow to them as some sort of geniuses that materialized out of thin air to bestow on Man the gift of flight."—JT On the contrary, the history of "flight" prior to the Wright brothers is critical in understanding the nature of their innovations. " 'Someday, I want…to fly.' So an extremely simple goal resulted ultimately in a big complex artifact".—JT Behaviors and artifacts based on future goals? Whose side are you on, JT? Tim
I misread your comment somehow:
JT writes:
“Suppose someone were studying airplanes and bicycles from say the early 1910’s, and came to a conclusion, “It seems apparent that evolution of airplanes must have begun via a duplication of bicycle technology.” This would be a valid observation, regardless that an explanation for the evolution of bicycles wasn’t provided also.”
First, and in good nitpicking style, I point out that such a statement wouldn’t be an observation, but an inference, a minor point and perhaps not to be dwelt on except for the following. . .
What I meant was supposing some airplane crashed somewhere out in remote Montana, killing the pilot. And some old farmer spent a lot of time going over the wreckage, in awe. And he also happened to have an old bike in the back of the barn, and after studying the wrecked plane for several weeks started observing, "there are a lot of bicycle parts in here. This thing might have originated in a bicycle shop." That would be a valid inference to a true fact.
I agree it would be an inference, a valid and justified inference, so yes, observation wouldn't be the correct term (unless I meant someone who read in the paper about the Wright Bros.' bicycle shop or knew them personally, which I didn't). But valid inference is all that is relevant. Researchers don't observe a gene duplicating; they infer it. JT
The whole thing about the Montana farmer probably came from Dawkins or someone. JT
Tim [56]:
JT writes:
“Suppose someone were studying airplanes and bicycles from say the early 1910’s, and came to a conclusion, “It seems apparent that evolution of airplanes must have begun via a duplication of bicycle technology.” This would be a valid observation, regardless that an explanation for the evolution of bicycles wasn’t provided also.”
First, and in good nitpicking style, I point out that such a statement wouldn’t be an observation, but an inference, a minor point and perhaps not to be dwelt on except for the following. . .
What I meant was supposing some airplane crashed somewhere out in remote Montana, killing the pilot. And some old farmer spent a lot of time going over the wreckage, in awe. And he also happened to have an old bike in the back of the barn, and after studying the wrecked plane for several weeks started observing, "there are a lot of bicycle parts in here. This thing might have originated in a bicycle shop." That would be a valid inference to a true fact.
More importantly, though, is the notion that airplanes, actually airplane technology, “evolved”. Now, I’ll grant that JT is arguing by analogy...
No I do not mean it to be understood merely as an analogy. When you actually start looking closely at what we call human "design", it looks an awful lot like evolution. And also, if human design is a physical process, that is, does not entail metaphysical attributes in humans, then human design is a physical process that results in complex artifacts. It seems instructive to compare it to any proposed physical process purported to result in complex biological organisms. And also as a side note, as I'm thinking about it, nature does not equate to randomness. What we call 'nature' is tightly constrained. Wildly fantastic things occur in nature, totally apart from what I.D. calls "intelligence", and even I.D. admits that. But nature is not randomness. So to say nature created or designed something is not the same as saying randomness created it.
Even a cursory survey of the history of airplane technology shows that airplane technology does not evolve at all. I know little about TRIZ, but I can tell you that airplanes, and more importantly, the technology behind them, follow patterns of development described by theories found in TRIZ — patterns that are foreign to evolutionary thought.
I have never seen the acronym TRIZ before and haven't the slightest idea what it's referring to. Of course I could google it. But on the evolution of planes, man has wanted to fly at least since the time of Icarus. And the development of planes was quite a comical and tragic process along the way. And you can't just throw out the long history as if it's irrelevant to the Wright Bros, and bow to them as some sort of geniuses that materialized out of thin air to bestow on Man the gift of flight. If they were born in New Guinea, then they would have never mastered flight, regardless of how genius they were. But what was the Wright Bros.' actual goal? Was it to build a complicated artifact of metal wire and wood and fabric and rubber? Did Orville as a child sit around and daydream, "Someday, I wish I could build a big complicated artifact of metal wire and wood and fabric and rubber"? No, he thought, "Someday, I want...to fly." So an extremely simple goal resulted ultimately in a big complex artifact. Actually, down through history, it resulted in a lot of big complex artifacts, ones that worked only marginally, if at all. But the various complex attributes of the Wright Bros.' plane were imposed on them by nature itself. These attributes were deduced through countless hours of observation, calculation, and trial and error. But it wasn't their goal. JT
Oh, NOW I get it. . . and all this time my lack of imagination has kept me stupid. . . the next time someone calls me stupid, I'll just accept it as the scientific explanation it is . . . Tim
Tim, let me discuss the evolution of the airplane as it evolved from the bicycle. Let's start at the beginning. It is clear that the unicycle preceded the bicycle. It only has one wheel, and no steering wheel system. The bicycle involves a simple gene-duplication to produce the two wheels. Even the piping that makes the bicycle already preexisted in the unicycle. The development of the pedal-chain system proves to pre-exist the bicycle, as some unicycles, to gain height, had already developed the chain system. The seat is relatively unchanged. The steering wheel was a major evolutionary leap, but hardly irreducibly complex -- believe me. The bicycle was followed by the internal combustion motorcycle. Inside the engine are parts that were clearly co-opted from the bicycle. They include a flywheel, crankshaft (co-opted from the pedal), and piston. The piston was co-opted from the mechanism that clamps the handlebars to the downtube to the front wheel. The car was a mass duplication event from the motorcycle. There really is little difference between the car and two motorcycles side-by-side. The motorcycle is the common ancestor of the car and the airplane. We see, for instance, that most small airplanes have three wheels. This is the clear indicator of that separation. The airplane propeller was co-opted from the spoke system first seen in the unicycle wheel. The jet engine is just an evolution from the airplane engine, where the propeller was repeatedly duplicated and modified. Any questions? As you can see, evolution is perfectly obvious. Anyone who doesn't accept the "theory" (fact) is obviously stupid. bFast
Am I just nitpicking? JT writes: "Suppose someone were studying airplanes and bicycles from say the early 1910's, and came to a conclusion, "It seems apparent that evolution of airplanes must have begun via a duplication of bicycle technology." This would be a valid observation, regardless that an explanation for the evolution of bicycles wasn't provided also." First, and in good nitpicking style, I point out that such a statement wouldn't be an observation, but an inference, a minor point and perhaps not to be dwelt on except for the following. . . More importantly, though, is the notion that airplanes, actually airplane technology, "evolved". Now, I'll grant that JT is arguing by analogy, and because I argue by analogy often, I'll admit that I am loath to nitpick the moment when the analogy breaks down as if that moment is of some crucial import. However, I am of the opinion that this analogy never gets started. Even a cursory survey of the history of airplane technology shows that airplane technology does not evolve at all. I know little about TRIZ, but I can tell you that airplanes, and more importantly, the technology behind them, follow patterns of development described by theories found in TRIZ -- patterns that are foreign to evolutionary thought. "It seems apparent that evolution of airplanes must have begun via a duplication of bicycle technology"? (my bolds) I am not even sure that that is a coherent statement. If it is, then what is meant by a duplication of technology? I am lost here. I began with the big nitpick on observation/inference. Line up bicycles from old through modern, and planes likewise, and you will be tempted to make some observations about evolving technology. However, I must insist that those are merely inferences, and furthermore, I must remind you that TRIZ lurks . . . . . . probably just nitpicking Tim
Joseph:
As for foresight, as I have already told you, the foresight is programmed into the organism- see Dr Spetner's "Not By Chance".
"as I have already told you" -- Thanks. We needed the authority of your declaration. Now that we have it, the ID - Darwin debate has ended. Thanks again. The front-loading hypothesis, a subset if ID certainly posits that the foresight was programmed in. Many IDers hold to the front-loading hypothesis. I, for one, do not accept that front-loading explains it all. It may be a factor, it gets good support from evidence that there is an additional preservative in DNA beyond natural selection. However, I am not prepared to conscede that there is enough storage capacity in single-celled life to have programmed the varieties that followed. That said, if front-loading were validated, then foresight would be established. If foresight were established Allen_MacNeill would jump ship and become an IDer. Right, Allen_MacNeill? bFast
Joseph: 1] Not all legitimately "random" variables or functions/distributions are "flat." 2] Non-foresighted -- thus, at-random -- redistributions of DNA base sequences do not account for the origin of the sequences [esp at novel body-plan level, starting with first life], or their functionality or the underlying codes/computer language and algorithms and data structures, or the associated interfacing and processing nanomachinery. GEM of TKI kairosfocus
Joseph,
As for Allen's continuing to say that euks evolved from proks via SET, there is also scientific data which demonstrates that proks "devolved" from euks- euks came first.
You have said this many times now, but have never provided any sources or citations for those claims, or even a summary of the evidence. Could you please do so? Khan
Joseph, If what you say is true, then the NCSE is not in touch with current evolutionary biology theory, or at least a major segment of it. Most of the changes to the genome, according to this segment, are due to retroposition. This can be due to as little as a couple of nucleotides or to long strings being inserted back into the genome at random places. This is part of the theoretical basis for punctuated equilibrium and why there is stasis and then sudden changes. Since Allen is at Cornell I would believe he is under the influence of those who adhere to this theory. jerry
Are Mutations Random?
The statement that mutations are random is both profoundly true and profoundly untrue at the same time. The true aspect of this statement stems from the fact that, to the best of our knowledge, the consequences of a mutation have no influence whatsoever on the probability that this mutation will or will not occur. In other words, mutations occur randomly with respect to whether their effects are useful. Thus, beneficial DNA changes do not happen more often simply because an organism could benefit from them. Moreover, even if an organism has acquired a beneficial mutation during its lifetime, the corresponding information will not flow back into the DNA in the organism's germline. This is a fundamental insight that Jean-Baptiste Lamarck got wrong and Charles Darwin got right. However, the idea that mutations are random can be regarded as untrue if one considers the fact that not all types of mutations occur with equal probability. Rather, some occur more frequently than others because they are favored by low-level biochemical reactions. These reactions are also the main reason why mutations are an inescapable property of any system that is capable of reproduction in the real world. Mutation rates are usually very low, and biological systems go to extraordinary lengths to keep them as low as possible, mostly because many mutational effects are harmful. Nonetheless, mutation rates never reach zero, even despite both low-level protective mechanisms, like DNA repair or proofreading during DNA replication, and high-level mechanisms, like melanin deposition in skin cells to reduce radiation damage. Beyond a certain point, avoiding mutation simply becomes too costly to cells. Thus, mutation will always be present as a powerful force in evolution.
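The two senses of "random" in the passage above can be separated in a toy simulation (a sketch only; the rate and bias values are arbitrary, and real mutational spectra are far richer). Whether a site mutates is decided with no reference to fitness, while the kind of change is biased, here toward transitions (A<->G, C<->T) over transversions:

import random

TRANSITIONS = {"A": "G", "G": "A", "C": "T", "T": "C"}

def mutate(seq, rate=0.01, transition_bias=0.7):
    out = []
    for base in seq:
        if random.random() < rate:                 # occurrence: blind to benefit
            if random.random() < transition_bias:  # biased spectrum: transitions favored
                out.append(TRANSITIONS[base])
            else:
                out.append(random.choice([b for b in "ACGT"
                                          if b != base and b != TRANSITIONS[base]]))
        else:
            out.append(base)
    return "".join(out)

print(mutate("ACGTACGTACGTACGT" * 3))

Nothing in the occurrence test consults the organism's needs, yet the spectrum of changes is decidedly non-uniform, which is the sense in which mutation is both profoundly random and profoundly non-random.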
Joseph
As for Allen's continuing to say that euks evolved from proks via SET, there is also scientific data which demonstrates that proks "devolved" from euks- euks came first. So why does Allen refuse to even consider that scenario? Joseph
For Allen MacNeill: Evolution 101: Mutations are random. The NCSE says that the major cause of mutations is copying errors- Mutation defined. As for foresight, as I have already told you, the foresight is programmed into the organism- see Dr Spetner's "Not By Chance". Joseph
OK, Dar-evo skeptic. tribune7
tribune7, Aren't all of us regulars here "evo skeptics"? Actually I am not. I believe evo happened. I am just skeptical of how it happened. jerry
Jerry, I prefer the term "evo-skeptic". tribune7
Allen, You may not know it but you just gave the farm away. In order to contradict my assertion that the 47+ engines of change have not produced macro evolution, you made a big point of endosymbiosis. But what was telling was the lack of other examples. For someone who teaches evolutionary biology, is writing a book on it, and claims to be on the cusp of what is cutting edge in evolutionary biology, there was an amazing lack of barking about macro evolution (our understanding of it). One would have thought that with all these scientists, and all these books, and all these years, there would have been just one tiny multi-cellular example to flaunt at us. No, we get endosymbiosis. Which may or may not be a macro evolutionary event. We can discuss this one in particular, but even if we come to the conclusion that it could be, it is only a speck of an oasis in a barren desert. We need a blooming botanical garden of examples for a theory with such overwhelming evidence that one has to be deranged to question it. And by the way, why did you identify tribune7 as a creationist on your blog? You have no way of knowing this. I have watched tribune7 comment for a couple years and wouldn't identify him as a creationist. He may be, but you would have to ask him before you assume. jerry
bFast (#44): "I contend, the intelligent design community contends, that foresight was necessary to produce life as we know it." Of course. There is no such thing as unguided material causation operating in nature. Life, or species, appear designed. The appearance, coupled with organized complexity seen in every aspect of nature, corresponds directly to the work of Intelligent causation, not unintelligent causation (= unguided material). Darwinism makes no sense while existing in a state dependent upon illogic. This is what happens when God is excluded. Ray R. Martinez
R. Martinez:
Why is Allen attempting to distance evolutionary mechanisms from the concepts seen in “random” and “accidental”? Answer: He is attempting to persuade the naive undecided Theist reader into believing that evolution is friendly to Theism worldview.
I think you misunderstand Allen_MacNeill on this one. I would say rather that he is distancing himself from the concepts of random and accidental in an attempt to convince the unlearned community that IDers are actually idiots, and that there is much more to Darwinian theory than there really is. While, in the splitting of hairs, he is correct that Darwinian theory counts on some stuff that is cyclical rather than "random" and on some mechanisms that were supposedly developed via chance + necessity, none of this is beyond the recognition of the main ID community. We just find it difficult to give a complex discussion of the subtleties of RM every time we use it. It remains that, according to Darwinian theory, there are only two forces at work in biology -- variation that has no "intention", no foresight, no strategy for developing biology, and "natural selection", the one and only filter that separates the successful from the unsuccessful. (Yes, Allen_MacNeill, I consider sexual selection to be a subset of natural selection.) Neo-Darwinian theory, therefore, is fully summarized as non-foresighted variation filtered through natural selection (NFV + NS). I contend, the intelligent design community contends, that foresight was necessary to produce life as we know it. bFast
bFast (#41): "R. Martinez, your link doesn't work. You need to post the full address not the version with the ellipses (…)." Thanks. I mindlessly copied and pasted Allen's quote which included the link in its inoperable form. For what it is worth here is the working link to Allen's essay: http://evolutionlist.blogspot.com/2007/10/rm-ns-creationist-and-id-strawman.html Thanks for pointing out my error. Ray R. Martinez
Allen_MacNeill (#37): "As to the assertion that all of the 47+ mechanisms listed in my blog are 'random' or 'accidental', this is simply not the case. On the contrary, a large percentage of these mechanisms are the result of processes that are not 'random' by any reasonable definition of that term. I have repeatedly been very careful to point this out, but that clearly has been missed by some of the commentators here...." The above paragraph says "a large percentage" of evolutionary mechanisms are NOT random "by any reasonable definition of that term...." Allen continues: "....It is also not the case that the 47+ processes are not 'guided'. Indeed they are 'guided', by the various internal and environmental forces that produce both the variations and the various evolutionary mechanisms that operate upon them [long list of these alleged mechanisms omitted]." Allen places single quote marks around the word guided like this ('guided') to indicate that these alleged mechanisms have no connection to Guide or Intelligence. Then he says that these mechanisms are 'guided' "by various internal and environmental forces." In other words, inanimate matter. Allen is saying that these "various internal and environmental forces" behave in a guided, non-random manner. He is assigning, by assertion, properties that belong exclusively to Mind and Intelligence to inanimate matter. If these forces are not guided then they must be random, since unguidedness and unpredictability correspond. Since Allen has admitted that the mechanisms have no connection to Guide or Mind, they, by definition, MUST be random and accidental, since inanimate matter has no mind and is unaware of its own existence. Why is Allen attempting to distance evolutionary mechanisms from the concepts seen in "random" and "accidental"? Answer: He is attempting to persuade the naive undecided Theist reader into believing that evolution is friendly to Theism worldview. Allen recognizes that the concepts seen in "random" and "accidental" prevent Theists from accepting evolutionary theory because no Theist can look at nature and conclude that it happened by randomness, chance, or accident. When and if said Theist accepts evolutionary theory, its alleged mechanisms remain random- and accident-based mechanisms. Ray R. Martinez
R. Martinez, your link doesn't work. You need to post the full address not the version with the ellipses (...). bFast
Allen_MacNeill (#19): "As for the old "RM & NS" strawman, please go here: http://evolutionlist.blogspot......awman.html ID supporters are arguing against a version of evolutionary theory that was almost fifty years out of date by the turn of the millennium, which they would know if they actually had any training in the science of evolutionary biology." According to the modern theory since Darwin AND the biological synthesis, RM + NS is the main (but not the exclusive) mechanism causing biological production. Your essay simply protests the lack of a more robust description of RM + NS while blaming anti-evolutionists. No one is obligated to include reams of jargon when simply referring to the main mechanism. When RM + NS is alluded to in this simple way, your three prerequisites are inclusively presupposed. Since Darwinists do the same, that is, refer to RM + NS the exact same way, you literally have no point, Allen. Ray R. Martinez
Allen_MacNeill:
That said, however, it is also demonstrably the case that none of the mechanisms listed above can be shown empirically to be “foresighted”.
Are you agreeing with me that if the ID community changes its summary of naturalistic evolution from RM + NS to NFV + NS, then we have, in your opinion, entered the modern biological world? Are you in agreement that if foresight were demonstrated in the evolutionary process, it would obligate you to become an IDer? (Please note post #31 in the "Increased Oxygen = Increased Biological Information…" thread for examples where foresight seems to be the best explanation for certain data, and tests to confirm the existence of foresight.) I personally am of the mind that foresight is the key feature of intelligence that the ID community should be focused on. I think it reasonable to define intelligence as foresight even though intelligence has other attributes. Allen_MacNeill:
Indeed, the whole idea of “foresightedness” in natural processes seems to me to violate several very well-established principles of physics, including the Second Law of Thermodynamics.
Are you suggesting that foresight is an impossibility? bFast
Allen, I'm curious as well about #35. Where is the actual proof of this? It's just a story. ellijacket
As to the assertion that all of the 47+ mechanisms listed in my blog are "random" or "accidental", this is simply not the case. On the contrary, a large percentage of these mechanisms are the result of processes that are not "random" by any reasonable definition of that term. I have repeatedly been very careful to point this out, but that clearly has been missed by some of the commentators here. It is also not the case that the 47+ processes are not "guided". Indeed they are "guided", by the various internal and environmental forces that produce both the variations and the various evolutionary mechanisms that operate upon them (i.e. natural selection, sexual selection, founder effects, genetic bottlenecks, neutral "drift" in deep evolutionary time, exaptation, heterochronic development, changes in homeotic development, interspecific competition, species-level selection, serial endosymbiosis, convergence/divergence, hybridization, phylogenetic fusion, background and mass extinction/adaptive radiation, and internal variance). That said, however, it is also demonstrably the case that none of the mechanisms listed above can be shown empirically to be "foresighted". Indeed, the whole idea of "foresightedness" in natural processes seems to me to violate several very well-established principles of physics, including the Second Law of Thermodynamics. How can any natural process be empirically shown to be genuinely "foresighted"? Do rocks fall "in order to" reach the ground? Do gas molecules move "in order to" produce the phenomena we describe with Boyle's Law? Do the electrons in the valence energy shells of hydrogen and oxygen form shared couplets "in order to" produce water? Do particular genetic changes happen "in order to" produce phenotypic changes that have no effects on organisms' survival and/or reproduction now, but might have in the future? And how can anyone show any of these to be the case? It is important to note that the terms "foresighted" and "goal-oriented" are not equivalent. The latter term is entirely compatible with both physics in general and evolutionary biology in particular. Indeed, the genomes of all living organisms are "goal-oriented programs" (as most clearly pointed out by Ernst Mayr), in that they organize and control the assembly and operation of the living organisms for which they code. However, the processes by which such genomes have come into being (i.e. the 47+ mechanisms listed here, operating through the various mechanisms of micro- and macroevolution listed above) have not been empirically shown to be either "foresighted" or "goal-oriented". It seems to me that this would be extremely difficult, if not impossible, to do. What kinds of empirical observations could one conduct that would unambiguously verify today that some component of an existing organism's genome or phenome was present in that organism now because at some point in the future it might become necessary for that organism's survival and/or reproduction? Clearly, once an organism has survived and/or reproduced, one can point to its various attributes and say "yes, that attribute appears to have contributed to the organism's survival/reproduction". However, that is no more evidence of "foresightedness" than a lottery winner saying "I chose these lottery numbers (or bought those particular scratch-off tickets) because I knew they would be winners".
This is known as the “fallacy of affirming the consequent” (also called post hoc, ergo propter hoc argumentation) and is logically inadmissible in the natural sciences. Allen_MacNeill
Allen_MacNeill:
serial endosymbiosis ... resulted in the origin of the Domain Eukarya around 2 billion years ago.
How cool. Has anyone moved this above the level of hypothesis, above the level of "just so story"? As prokaryote evolution has been virtually non-existent for the last 2 billion years, the raw materials surely still exist. A little labwork should be able to produce a serial endosymbiosis event that creates a new eukaryote. Now that would be some serious support for naturalistic evolution. BTW, please limit the experiment to no more than two simultaneous variation events, as we know that this is the extreme edge of evolution's prowess. bFast
In #33 jerry wrote:
"One thing that Allen has not been able to do is point to any meaningful macroevolution from his 47+ (since I don’t know how many there are now) engines of variation."
On the contrary, I have done exactly this. As just one example, here is a link to a recent article on my blog: http://evolutionlist.blogspot.com/2009/02/macroevolution-examples-and-evidence.html In it, I describe how a well-studied macroevolutionary mechanism – serial endosymbiosis (#40 in the list of mechanisms that produce phenotypic variation, found here: http://evolutionlist.blogspot.com/2007/10/rm-ns-creationist-and-id-strawman.html) – resulted in the origin of the Domain Eukarya around 2 billion years ago. There are many other examples, some of which I am including in my new evolution textbook (for non-scientists), scheduled for publication in 2010. Allen_MacNeill
Joseph, I agree about migration; I think it is true that migration probably needs knowledge. I would be willing to admit that there could be some very small chance that some sort of automatic migration is possible. Collin
Joseph, In some of the stuff Allen has recommended in the past are studies/theories that say that some mutations are not random. In some of them only certain types of mutations will happen, and I believe this is currently limited to single-celled organisms. In other cases there are those who propose that environmental pressures increase the mutation rate and thus the likelihood for change. Allen is not so absolute and covers all his bases, so if one accuses him of something then he can call you wrong and an ignoramus. One thing that Allen has not been able to do is point to any meaningful macro evolution from his 47+ (since I don't know how many there are now) engines of variation. Also you have to know that Allen must keep up his anti-ID bona fides or else he will be cut off at the knees not only at Cornell but at any place in the evolutionary biology world. jerry
JT, Saying "instinctive" is just another way of saying "we don't have any idea". The point about migration is INFORMATION. As in, where did the INFORMATION for migration come from? To migrate takes KNOWLEDGE. You have to know where you are, know where you want to go and know how to get there. And there isn't anything scientific which demonstrates such knowledge can be reduced to chemical reactions. Joseph
To Allen MacNeill, In your scenario every mutation is a genetic accident. And therefore "evolution" occurs through an accumulation of genetic accidents. From the "Contemporary Discourse in the Field Of Biology" series I am reading Biological Evolution: An Anthology of Current Thought, edited by Katy Human:
The old, discredited equation of evolution with progress has been largely superseded by the almost whimsical notion that evolution requires mistakes to bring about specieswide adaptation. Natural selection requires variation, and variation requires mutations- those accidental deletions or additions of material deep within the DNA of our cells. In an increasingly slick, fast-paced, automated, impersonal world, one in which we are constantly being reminded of the narrow margin for error, it is refreshing to be reminded that mistakes are a powerful and necessary creative force. A few important but subtle “mistakes,” in evolutionary terms, may save the human race. -page 10 ending the intro
As for attacking a strawman- YOU would know about that. 1- No one insists on the fixity of species 2- No one argues against the macro-evolution you are using. Joseph
2) agency, whatever that agency is but always something other than nature, operating freely.
To distinguish design from nature is just a dead end, IMO.
Umm THAT is the whole point of EVERY design-centric venue. IOW EVERY design-centric venue seeks to do just that- separate nature, operating freely, from agency involvement. I have over 40 years of such experience. How much experience do you have? Joseph
Allen and bFast: I reviewed Allen's 'Sources of Heritable Variation' on his Website. Since these variations are heritable, they must involve changes to genomes. So, I think the basic thrust of bFast in #25, points 1-3, is correct - let's not get bogged down in equivocations about point mutation vs other random genetic changes. It all boils down to the classical 'random mutation,' though not in the simplistic way brandished like a club by creationists. (And I think it would be felicitous if you would both leave out the argumentative rhetoric.) Adel DiBagno
Thanks Collin. There are a lot of people here more knowledgeable than me, BTW. whoisyourcreator [3]:
Gene duplication is just another fantasy that has never been proven to occur. Here are the problems:
To me, gene duplication by itself seems like sort of a modest claim. It's hard to believe that someone would be claiming that gene duplication itself is some insurmountable challenge, some absurd, preposterous, fly-in-the-face-of-reason task, for a gene to be duplicated by accident.
There is NO scientific proof that gene duplication can create genes with more complex functions. Research papers reflect this admission by using words “most likely”:
So this would be a different objection you're making. However it seems apparent that a gene duplication itself does not create a new function (it merely duplicates something). So it seems you wouldn't actually need a proof for that. Also, gene duplication with subsequent modification does not seem like some completely new, outlandish speculation that's being foisted on the public of late. Rather it seems right in line with the traditional evolutionary viewpoint, wherein an entire organism is duplicated, only with some slight variations. (I'm not going to have time to go through all your sources this evening, unfortunately) ...
Also, what Darwinists fail to present is a feasible step-by-step scenario how each gene could: - split their functions in a precise manner so that neither function would be disabled until ‘random chance’ completed the event; - become fixed in the population during each new step:
I think both sides have used the strategy of demanding a complete step-by-step explanation from the other side. I.D. for their part tends to respond that we don't have to know anything about the process to know it was designed. To me though, any act of design would be a process - a long convoluted process involving many participants. Why shouldn't we expect there to be a long, drawn-out, error-prone process in the "design" of living things as well? The only reason I can think of is that this would seem to many I.D. advocates an insult to their concept of an infallible, all-powerful God, a God who can just materialize things instantaneously with no errors or no thinking at all. Well, why call that "design" if that's how it happened? Design to me is inherently a search process, i.e. searching for something that meets some criteria. But only fallible, limited things have to search. To me, it is a fallible universe that is doing the searching, and what it ultimately converges on, what persists ultimately, says something about the nature of God, but not that God is actually doing the designing or searching himself. I mean, I would say that there is front-loading of information if I understand that correctly, but the front-loaded information would be infinite, not specific. Maybe the idea of the universe doing the design is wrong, frankly. But it seems like a coherent and understandable perspective to me. As far as step-by-step explanations, histories of the early Germanic tribes are highly speculative but far from worthless. JT
There are no accepted definitions of the terms life, species, science and intelligence within the scientific community. And in reference to evolution, there is no accepted definition of macroevolution, and it is not quite clear just what microevolution is. And the word evolution has several definitions, so the chance that two people having a discussion on evolution are using the same definition is low. jerry
JT Your posts always give me a lot to think about. I think you are right if you are saying that the term intelligence needs to be more rigorously defined. Here are some things that I think of when I think of intelligence 1. Ability to recognize patterns 2. Ability to recreate patterns 3. Understanding symbols and abstract notions 4. Some ability to predict the future based on experience. I'm sure psychologists and philosophers would add a great deal to my little list. But I think that whether or not intelligence is a physical mechanism, it can be identified (probabilistically) in nature. While you are right to point out that the combustion engine was not all invented by one person, each 'mutation' probably involved some of the characteristics of intelligence I listed above. Language, motor vehicles, etc. may have evolved with a lot of randomness and physical mechanisms, but they also include a lot of ingenuity, understanding, knowledge, creativity, abstract representation, and imagination. These things can lead to design in nature, and we can detect that design without having to explain just what the designer was thinking when he/she/it implemented it or even what the exact mechanisms of that implementation were. I can tell that Mt Rushmore was designed without telling you how. Same with the computer I am using. Collin
Allen_MacNeill:
ID supporters are arguing against a version of evolutionary theory that was almost fifty years out of date by the turn of the millennium, which they would know if they actually had any training in the science of evolutionary biology.
Oh please. Not this canard. We have hashed the death out of this thing ten times before. In your honor, your "engines of evolution" are frequently referenced. There is no one serious on this site who simply thinks "point-mutations" when we reference RM. However, Allen_MacNeill, I understand that the last time we hashed this thing out we agreed on the following: 1 - There are many more "random" mutational events than point mutations, and a random event would include events such as HGT and even fusions. 2 - There are "random" events that are not mutations, such as asteroid strikes, volcanoes, etc. 3 - These "random" events may not be truly random; they may have some predictability. However, we agreed, if I recall, that they are non-foresighted, that they occur without benefit of an evolutionary strategy. 4 - Let me also add the caveat that it is not outside the scope of the evolutionary model to develop strategic evolutionary advantages. For instance, if a mechanism were discovered that played a role in using HGT to spread a mechanism of viral resistance, such a mechanism would still be within the valid scope of the current modern evolutionary theory if it were developed via the naturalistic evolutionary process. In short, it is evolutionary for evolution to have developed mechanisms. Now, I have chosen the acronym NFV (non-foresighted variation) rather than RM because of our earlier discussion. However, as many on this site frequently cite your "engines of evolution" when defining the RM or RV part of the formula, I find it disgusting and childish that you would mock us with the post that you referenced above. Further, Allen_MacNeill, I have no power on this site. But I remind you that this site, though a bit more tolerant now than it has been, has very few evolutionary scientists posting on it. The kind of bs that you referenced above is the kind of bs that may well see more tolerant move to less tolerant. If you want to see balance on this site, if you want the truth of the evolutionary theory also represented on this site, it would behoove you not to spend your energy calling us idiots who are 50 years out of date. bFast
Joseph wrote [1]:
- In his book “Why is a Fly Not a Horse?” Giuseppe Sermonti has a chapter (VIII) titled “I Can Only Tell You What You Already Know”, which examines this very thing- how do organisms “know” to migrate and to where?
An experiment was conducted on birds- blackcaps, in this case. These are diurnal Sylviidae that become nocturnal at migration time. When the moment for departure comes, they become agitated and must take off and fly in a south-south-westerly direction. In the experiment, individuals were raised in isolation from the time of hatching. In September or October the sky was revealed to them for the first time. Up there in splendid array were the stars of Cassiopeia, of Lyra (with Vega) and Cygnus (with Deneb). The blackcaps became agitated and, without hesitation, set off flying south-south-west. If the stars became hidden, the blackcaps calmed down and lost their impatience to fly off in the direction characteristic of their species. The experiment was repeated in the Spring, with the new season's stars, and the blackcaps left in the opposite direction- north-north-east! Were they then acquainted with the heavens when no one had taught them?
The experiment was repeated in a planetarium, under an artificial sky, with the same results! The bottom line is that there is much more going on than just chemical reactions caused by genetic material. But that reduction is all the evolutionists have, and I say it hampers investigations by preventing us from seeking answers outside of the genome.
I don't see how it follows from the above that there is much more going on than just "chemical reactions caused by genetic material". ID advocates are often pressed to give an account of the designer's process, and although they often deride the other side for "just so" stories, one from you would be welcome here. Would the designer say, "I'm going to design these birds so that when they see a particular pattern of stars it will cause them to fly in a south-south-westerly direction, which will be advantageous for them because it will bring them right to suitable nesting sites for the Winter"? Note that even if our designer was planning that way, the birds' actual behavior would still be a result of chemical reactions caused by genetic material. It's still purely instinctive, right? That's how some respondents to me have explained away the rudimentary languages of various non-human animals, for example, by saying it's all instinctive (as opposed to behavior exhibited by humans). Here's my just-so story: Most birds have outstanding vision and most birds are very vision-oriented as well. At some point in the distant past, there were a variety of flight behaviors being triggered completely involuntarily and randomly by the visual data the birds were taking in from the night skies. Some headed off north, pulled by some inexorable force they could not resist. They were all dead, though, in a few generations. Some were inexorably pulled south, and a good thing for them too, because it took them to suitable winter nesting grounds, and so they passed on their genetic material and thus thrived. Those who had a tendency to fly east or west or stay put also dwindled to extinction. JT
"ID supporters are arguing against a version of evolutionary that was almost fifty years out of date by the turn of the millennium, which they would know if they actually had any training in the science of evolutionary biology." Talking about strawmen. That is one of most egregious uses of the concept I have seen here. I once compared Allen to the headless horseman who rides in quickly and then rides out. But now I am beginning to think Allen has been spending too much time with the scarecrow from the Wizard of Oz. Our thing is information and the ability to generate new meaningful information and how that contradicts all current and past versions of the evolutionary synthesis. Don't use that charade of rm + ns is all we know which by the way still describes nearly all of the latest synthesis. It depends on how you define the "rm" part. Get up to date Allen. It is embarrassing for us to have to keep correcting you here by using out dated 20th century concepts of ID. jerry
There are a large number of evolutionary biologists who explain new capabilities by duplication of genome elements, not just genes. The genome contains a large number of segments that are thought to be the result of reverse transcription or retroposition. These genetic elements remain fallow for millions of years and mutate and avoid selection. Then magically a small number of these elements become functional. This is how novelty is introduced and why one sees the phenomenon of punctuated equilibrium. I believe this simply states the theory. It is not just gene duplication but a lot of other stuff that gets duplicated and inserted back into the genome by retroposition. For example: http://exppc01.uni-muenster.de/expath/articles/Genetica.retro.2003.pdf This is why the work of Dembski and Marks and Kirk Durston is important in evaluating this type of claim. I believe this is now the cutting edge of what evolutionary biologists think caused novelty in the genome. So this is what should be debated. jerry
While we're on the subject of gene duplication, there is very strong evidence that the origin of vertebrates involved the double duplication of the entire ancestral vertebrate genome. That is, the ancestral chordate genome was duplicated twice, producing a genome with four copies of the entire information for assembling and operating the organism. After this happened, all four copies "drifted", generally with at least one copy of each gene remaining relatively stable (via stabilizing selection), while the other copies drifted into new configurations, many of which became adapted to other functions. This is easily seen in the detailed structure of the vertebrate genome. It's a lot like viewing the HTML code for a long post or template, and seeing where the writer copied, pasted, and modified the pasted copies, rather than writing all new code from scratch. And, before you jump and say "that's proof that it was coded by an 'intelligent coder'", consider that much of the duplicated code is pure, non-adaptive nonsense; not transcribed, not translated, and in many cases clearly composed of degenerate copies of genes that no longer have any detectable function. In other words, the "coder" could copy and paste, but had no way to delete code that wasn't necessary, and made many mistakes besides. Kind of like copying and pasting "cdesign proponentsists"... Allen_MacNeill
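Allen's copy-paste-drift picture is easy to caricature in a few lines of Python (a toy model; the sequence length, copy number, and substitution rates are arbitrary, and real duplicate evolution involves selection on function, not raw sequence drift):

import random

random.seed(1)
BASES = "ACGT"

def drift(gene, rate):
    # Resample bases at a fixed per-site rate (a resample can return the same base).
    return "".join(random.choice(BASES) if random.random() < rate else b
                   for b in gene)

ancestor = "".join(random.choice(BASES) for _ in range(40))
copies = [ancestor] * 4            # duplication: identical copies to start

# One copy held nearly constant (stabilizing selection); the rest free to wander.
copies = [drift(copies[0], 0.01)] + [drift(c, 0.3) for c in copies[1:]]

for c in copies:
    conserved = sum(a == b for a, b in zip(ancestor, c))
    print(c, f"{conserved}/40 sites match the ancestor")

The conserved copy stays recognizable while the drifting copies accumulate differences, including plenty of degenerate sequence, which is the signature Allen says is visible in the vertebrate genome.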
BTW, the list of known mechanisms that cause evolutionary changes in phenotypes is now much longer than when I posted that article at my blog. People keep sending me new research reports, and I run across new mechanisms in my own reading. That's part of being a responsible participant in any ongoing academic field; keeping up with the literature. I intend to post an updated version every six months or thereabouts, so if you're interested in this subject, check in periodically at http://evolutionlist.blogspot.com/ for updates! Allen_MacNeill
As for the old "RM & NS" strawman, please go here: http://evolutionlist.blogspot.com/2007/10/rm-ns-creationist-and-id-strawman.html ID supporters are arguing against a version of evolutionary theory that was almost fifty years out of date by the turn of the millennium, which they would know if they actually had any training in the science of evolutionary biology. Allen_MacNeill
Joseph [10] wrote: And I would say the point is this alleged gene duplication appears to be nothing more than an ad hoc narrative gloss, as the issues with gene duplication have been posted and JT seems to be ignoring them. I'll take a closer look at your and wiyc's original comments in a bit and respond. JT
Sorry, JT, I didn’t mean for you to go to a lot of trouble.
No, no problem. I actually went out to walk the dog, and it took me 30 seconds to find the above.
You talking to me? No argument, man. I just asked a question. Curiosity, you know?
That entire post [15] was Allen MacNeill's. Original thread: https://uncommondescent.com/intelligent-design/darwin-reader-darwins-racism/ [Anchor tags are not working for me.] JT
Sorry, JT, I didn't mean for you to go to a lot of trouble.
According to Lynn Margulis, the conversion of a one-celled organism into a simple eukaryote didn’t require any mutations at all. Instead, it was the result of what is now commonly referred to as “serial endosymbiosis”, whereby the fusion of two prokaryotic cells formed the ancestor of all eukaryotes
That's a fine theory, but I doubt that the product of the fusion would have survived selection without some other changes, like mutations? (As long as we're speculating.)
So, your lame attempt to invoke the standard “RM & NS” strawman argument doesn’t even begin to address the issue of the origin of eukaryotes, which you would have known if you had even a passing acquaintance with modern evolutionary theory.
You talking to me? No argument, man. I just asked a question. Curiosity, you know? I also heart Margulis. Adel DiBagno
Adel [13]: Allen_MacNeill wrote [UD 2/14/09]: According to Lynn Margulis, the conversion of a one-celled organism into a simple eukaryote didn't require any mutations at all. Instead, it was the result of what is now commonly referred to as "serial endosymbiosis", whereby the fusion of two prokaryotic cells formed the ancestor of all eukaryotes. So, your lame attempt to invoke the standard "RM & NS" strawman argument doesn't even begin to address the issue of the origin of eukaryotes, which you would have known if you had even a passing acquaintance with modern evolutionary theory. BTW, Lynn Margulis is widely recognized as perhaps the most important female evolutionary biologist of the 20th century, and one of the top ten evolutionary biologists of all time, regardless of gender. And she completely rejects the "modern evolutionary synthesis" as an inadequate explanation of the major macroevolutionary transitions, such as the evolution of eukaryotic cells. And I, along with many other evolutionary biologists, agree with her. Why? Not only is her theory of serial endosymbiosis comprehensive and elegant, it has what ID "theory" completely lacks: a mountain of empirical evidence, amassed over four decades of hard, painstaking work by Margulis and her colleagues. JT
Adel: It seems more and more commonplace for pro-evo establishment folks to say RM-NS is far from being the complete picture. If you really are demanding evidence then I guess I'll take another half hour or so and hunt down some quotes for you. JT
JT:
Another observation: if I.D.'s goal is the overthrow of randomness as an explanation, it seems that they're slowly achieving that, as there has been an increasing number of remarks both in this forum and elsewhere by evo-theorists distancing themselves from RM-NS by itself as an explanation.
That's interesting; I missed the distancing remarks by evo-theorists. Who are they and what did they say? Adel DiBagno
Joseph [10]:
Umm duplicated bicycle technology could not produce an internal combustion engine. Also the debate is directed vs. undirected processes.
Yes, the internal combustion engine would have had a separate incremental developmental process extending back centuries, originating presumably in patrician Greek philosophers fooling around with boilers or some such. And of course whoever first developed the first rudimentary recognizable combustion engine would have been exploiting various necessary technologies without the slightest idea how they functioned. He probably didn't know anything about mining or refining ore, for example, or oil exploration or refining, or the complex distribution processes necessary to make such resources readily available, without which his rudimentary engine would never have been realized. As engine development proceeded into subsequent generations, future designers would be able to exploit various optimized engine subsystems, without necessarily possessing or needing direct understanding of why these systems were optimized and the laborious process necessary for previous designers to develop them. And then let's extend the process back to when metal was first refined and make the obvious observation that those people would not have had the slightest notion they were inventing technology absolutely essential for an internal combustion engine. So if you want to say the process was directed, it was directed by countless individuals, and processes, and ongoing societal infrastructures; without any of these aforementioned things the engine would not have materialized. So why could not the materialization of organic things also have been "directed" by a huge multitude of precipitating factors? And if you want to say it would have to have the necessary magic of human-like intelligence, you're saying the human brain is more powerful than the entire universe.
We don't know the exact mechanisms. But seeing as the design exists in this physical world, I don't see any reason to require anything other than physical mechanisms.
That's encouraging.
And I would say the point is this alleged gene duplication appears to be nothing more than an ad hoc narrative gloss, as the issues with gene duplication have been posted and JT seems to be ignoring them.
Deyes himself in the original article wasn't questioning whether gene duplication could or did occur in this case - only its efficacy in explaining everything that could potentially be known about these genes. I did read whoisyourcreator say, "Gene duplication is just another fantasy that has never been proven to occur.", but I thought he misread the article, which didn't claim that. It doesn't seem an outlandish idea to me. I haven't read everyone else's posts in this thread yet, in case some are claiming evidence that gene duplication doesn't actually happen.
2) agency, whatever that agency is but always something other than nature, operating freely.
To distinguish design from nature is just a dead end, IMO. JT
Umm duplicated bicycle technology could not produce an internal combustion engine. Also the debate is directed vs. undirected processes.
However, what I.D. cannot do and therefore should not attempt to do, is to overthrow the conception of biological life resulting from physical mechanisms.
We don't know the exact mechanisms. But seeing as the design exists in this physical world, I don't see any reason to require anything other than physical mechanisms. But again the only way we can tell is by studying the design in question.
If human design is a physical process, then it will be that much easier to equate to whatever (nonrandom) physical process is theorized to account for life.
Again if I brought my laptop to the Amazon Rain Forest and handed it to some natives- do you think they could tell me how it was designed and manufactured? And I would say the point is this alleged gene duplication appears to be nothing more than an ad hoc narrative gloss, as the issues with gene duplication have been posted and JT seems to be ignoring them. and JT:
And for I.D., intelligence is defined as something that is not a physical mechanism, or any other sort of mechanism for that matter.
"Intelligent" in ID refers to two things: 1) To differentiate between "apparent", ie illusory, design on one side and "optimal", ie perfect, design on the other and therefor 2) agency, whatever that agency is but always something other than nature, operating freely. Joseph
R. Deyes: In trying to deduce your implicit argument in the above article, let me give you the benefit of the doubt, that you're not saying you don't find it plausible that CRY-2 may have started life as CRY-1. Rather, what you're saying is that this does not explain everything about CRY-2, e.g. how it acquired the functionality it has that CRY-1 does not. However, there doesn't appear to be any evidence, either in what you've presented or in my quick perusal of the original paper on the topic, that anyone is claiming that it's a full explanation. Your implicit point is apparently that since no full explanation is provided, this is only further evidence that what has not been explained via a physical process yet cannot even potentially be explained that way. And in your mind this is further evidence that "intelligence" is necessary to account for the remainder. And for I.D., intelligence is defined as something that is not a physical mechanism, or any other sort of mechanism for that matter. Whenever any sort of physical process is at work and we don't know how it works, we have to "black box" it. From our vantage point, it might as well be operating by magic, as we are as of yet unable to explicate how it does what it does. So you can imagine, for example, some bizarre geothermal process that no one knows the workings of. However, in the case of most physical phenomena, there is an uncontroversial assumption that, irrespective of whether we know yet how it's being produced, there is in fact a potentially comprehensible mechanism to explain the phenomenon. It is only in the domain of biological phenomena that controversy exists, presumably because it involves our own origin. Even if there is severe inadequacy in our understanding of something, it is usually not the case that we know nothing about it. If what is described is only what is known, there is no valid conclusion to be made that the remainder can never be known, or can only be explained by a process that cannot actually be described (e.g. I.D.'s "Intelligence"). Here's the problem with saying that intelligence is not a mechanism: You are permanently relegating it to the domain of the unknown. By arguing (from ignorance) that some as yet unexplained physical phenomenon was most likely caused by "intelligence", you're saying it's most likely not even possible for this phenomenon to ever be understood, by anyone. But let me be clear that explaining something via randomness is also relegating it to a permanently unknown status. Really, randomness and "Intelligence" (as the vast majority in I.D. have defined it) are the same thing. But a deterministic physical process that explains how you got from a previous set of physical conditions to current physical conditions is not the same thing. JT
If human design is a physical process, then it will be that much easier to equate to whatever (nonrandom) physical process is theorized to account for life. JT
I don't know if the objection is that there might have been a process by which biological novelties emerged. In the analogy to human design, too rarely on the I.D. side is the process itself involved in human design actually examined. Another observation: if I.D.'s goal is the overthrow of randomness as an explanation, it seems that they're slowly achieving that, as there has been an increasing number of remarks both in this forum and elsewhere by evo-theorists distancing themselves from RM-NS by itself as an explanation. However, what I.D. cannot do and therefore should not attempt to do, is to overthrow the conception of biological life resulting from physical mechanisms. JT
"…If a factory for making bicycles were duplicated it would make bicycles, not motorcycles; that's what is meant by the word duplication."

Suppose someone were studying airplanes and bicycles from, say, the early 1910s, and came to the conclusion, "It seems apparent that the evolution of airplanes must have begun via a duplication of bicycle technology." This would be a valid observation, regardless of whether an explanation for the evolution of bicycles was provided as well. Come to think of it, motorcycles also would have started with a duplication of bicycle technology.

Also, the formation of CRY-2 from CRY-1 would have entailed a loss of functionality, as CRY-2 does not have photoreceptor capability and CRY-1 does. I thought I.D. would be fine with loss of information, at least.

"When it comes to supplying a plausible mechanism for how gene duplication and subsequent natural selection led to two distinctly functioning Cryptochromes and how these then integrated with other time-regulatory proteins in Monarch brains, there is a noticeable absence of detail."

As far as their integration with each other goes, that could be explained if one began as a copy of the other. If CRY-1 was already integrated with these other genes to begin with, then that would explain why CRY-2 was as well, if it started as a copy of CRY-1.

It seems the following could have been the original paper on the subject (it wasn't in your sources): http://mbe.oxfordjournals.org/cgi/content/full/24/4/948

That's the best I can do in 20 minutes this morning. JT
IMHO gene duplication would be an excellent mechanism pertaining to front-loading. IOW, it is a designed feature, part of the genetic algorithm, used to get a targeted, i.e. pre-programmed, result. Dr. Spetner has gene duplication as part of his "non-random evolutionary hypothesis"; see his book Not By Chance. The design inference comes from the fact that this newly duplicated (amplified) gene requires meta-information to activate, control, and integrate its product into the existing system, i.e. combinatorial logic* (*see S. Carroll's "Endless Forms Most Beautiful"). Joseph
Wonderful post, and a very fascinating subject. I would like to comment that duplicating a piece of a software program, so that the programmer may work on it and transform it according to his plans, is a very common step in computer programming (and in many other forms of design), and allows the designer to reuse the parts which can be kept in the new item. So I have always considered gene duplication a very likely signature of design.

But obviously, as Behe points out, the duplication is only the first, rather passive step. After that, active intelligent work is needed to transform the duplicated copy and achieve the new function. Moreover, even the duplication itself must be intelligently designed: indeed, if all duplications are possible and happen, the probability that one random duplication will be recognized as potentially useful if and when other random events change the duplicated piece is virtually nonexistent. Instead, an intelligent, guided duplication of potentially useful pieces of information, compatible with further development, can be the basis for designed evolution. gpuccio
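[Editor's illustration] To make the copy-and-modify analogy above concrete, here is a minimal Python sketch. The routine names (cry1_response, cry2_response) and their behaviors are invented for illustration only; they are not taken from the post or the paper. The sketch shows a routine being duplicated and one copy then rewritten for a new role while the original keeps its old one, which is the pattern the comment describes.

    # Hypothetical sketch of the "duplicate, then modify" pattern discussed above.
    # All names and behaviors are invented; none come from the actual paper.

    def cry1_response(signal):
        # The "original gene": a photoreceptor-like routine triggered by blue light.
        if signal == "blue_light":
            return "reset_clock"
        return "no_op"

    def cry2_response(signal):
        # Began as a verbatim copy of cry1_response; the light-sensing branch
        # was then dropped and a new regulatory role added (repression of the
        # period and timeless "genes"), mirroring the CRY-1/CRY-2 division of labor.
        if signal == "clock_peak":
            return "repress_period_and_timeless"
        return "no_op"

    print(cry1_response("blue_light"))   # -> reset_clock
    print(cry2_response("clock_peak"))   # -> repress_period_and_timeless

Note that the copy only becomes useful after it is deliberately rewritten; the duplication step by itself changes nothing, which is the point both Behe and the commenter make.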
Gene duplication is just another fantasy that has never been proven to occur. Here are the problems:

There is NO scientific proof that gene duplication can create genes with more complex functions. Research papers reflect this admission by using words like "most likely":

* "Duplicate gene evolution has most likely played a substantial role in both the rapid changes in organismal complexity apparent in deep evolutionary splits and the diversification of more closely related species. The rapid growth in the number of available genome sequences presents diverse opportunities to address important outstanding questions in duplicate gene evolution." http://biology.plosjournals.org/perlserv/?request=get-document&doi=10.1371%2Fjournal.pbio.0020206&ct=1

An erroneous example cited is the claim that, over 100 million years ago, two genes of the yeast S. cerevisiae supposedly evolved from one gene of another species of yeast (K. lactis). Refer to: http://www.nature.com/nature/journal/v449/n7163/abs/nature06151.html

What is the evidence for their claim? Nothing but the presupposition that Darwinism is true, so the very existence of two genes that total the same functions as the one gene is taken as sufficient evidence that they evolved from it:

* "The primary evidence that duplication has played a vital role in the evolution of new gene functions is the widespread existence of gene families." http://biology.plosjournals.org/perlserv/?request=get-document&doi=10.1371%2Fjournal.pbio.0020206&ct=1&SESSID=9999360a804131d0f0009da33ced0db9

Also, what Darwinists fail to present is a feasible step-by-step scenario for how each gene could:

- split its functions in a precise manner so that neither function would be disabled until 'random chance' completed the event;
- become fixed in the population during each new step:

* "A duplicated gene newly arisen in a single genome must overcome substantial hurdles before it can be observed in evolutionary comparisons. First, it must become fixed in the population, and second, it must be preserved over time. Population genetics tells us that for new alleles, fixation is a rare event, even for new mutations that confer an immediate selective advantage. Nevertheless, it has been estimated that one in a hundred genes is duplicated and fixed every million years (Lynch and Conery 2000), although it should be clear from the duplication mechanisms described above that it is highly unlikely that duplication rates are constant over time." http://biology.plosjournals.org/perlserv/?request=get-document&doi=10.1371%2Fjournal.pbio.0020206&ct=1&SESSID=9999360a804131d0f0009da33ced0db9

All genes need their own specialized molecular switch (G protein). Because of the split in function between the two genes, the molecular switch (G protein) must also be modified to coincide with the specific regulation needed to precisely regulate the new gene:

* "G proteins are so called because they function as 'molecular switches,' alternating between an inactive guanosine diphosphate (GDP) and active guanosine triphosphate (GTP) bound state, ultimately going on to regulate downstream cell processes." http://en.wikipedia.org/wiki/G_protein

* "Moreover, in order for the organism to respond to an ever-changing environment, intercellular signals must be transduced, amplified, and ultimately converted to the appropriate physiological response." http://edrv.endojournals.org/cgi/content/full/24/6/765

See movie on G-proteins: http://www.youtube.com/watch?v=NB7YfAvez3o&feature=related

From http://whoisyourcreator.com/gene_duplication.html

whoisyourcreator
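[Editor's illustration] As a side note on the fixation claim quoted in the comment above, here is a quick arithmetic sketch using the standard textbook approximations, not taken from the comment or its sources: a neutral allele fixes with probability roughly 1/(2N) in a diploid population of effective size N, and a beneficial allele with probability roughly 2s (Haldane's approximation, for small s). The values of N and s below are assumed for illustration only.

    # Back-of-envelope check on "fixation is a rare event", using standard
    # population-genetics approximations. N and s are assumed, illustrative values.

    N = 10_000   # assumed effective (diploid) population size
    s = 0.01     # assumed selective advantage of the duplicate

    p_neutral = 1 / (2 * N)   # neutral allele: P(fixation) ~ 1/(2N)
    p_selected = 2 * s        # beneficial allele: P(fixation) ~ 2s (small s)

    print(f"neutral duplicate:    ~{p_neutral:.5f}")   # ~0.00005
    print(f"beneficial duplicate: ~{p_selected:.2f}")  # ~0.02

Under these assumptions, even a duplicate with a 1% selective advantage is lost to drift about 98 times out of 100, which is consistent with the quoted statement that fixation is rare even for advantageous mutations.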
Great survey. Perfect-pitch rhetoric. Love to see more of this kind of thing on UD. allanius
Two points:

1- Gene duplication, in order to do something, also requires all the meta-information: a binding site, a promoter, an enhancer, and a repressor. Otherwise all the gene duplication in the world will not do anything except add more DNA to the existing genome.

2- In his book "Why is a Fly Not a Horse?" Giuseppe Sermonti has a chapter (VIII) titled "I Can Only Tell You What You Already Know", which examines this very thing: how do organisms "know" to migrate, and to where?
An experiment was conducted on birds: blackcaps, in this case. These are diurnal Sylviidae that become nocturnal at migration time. When the moment for departure comes, they become agitated and must take off and fly in a south-south-westerly direction. In the experiment, individuals were raised in isolation from the time of hatching. In September or October the sky was revealed to them for the first time. Up there in splendid array were the stars of Cassiopeia, of Lyra (with Vega), and of Cygnus (with Deneb). The blackcaps became agitated and, without hesitation, set off flying south-south-west. If the stars were hidden, the blackcaps calmed down and lost their impatience to fly off in the direction characteristic of their species. The experiment was repeated in the spring, with the new season's stars, and the blackcaps left in the opposite direction: north-north-east! Were they then acquainted with the heavens when no one had taught them?
The experiment was repeated in a planetarium, under an artificial sky, with the same results! The bottom line is that there is much more going on than just chemical reactions caused by genetic material. But that reduction is all the evolutionists have, and I say it hampers investigations by preventing us from seeking answers outside of the genome. Joseph
