Uncommon Descent Serving The Intelligent Design Community

Questioning The Role Of Gene Duplication-Based Evolution In Monarch Migration


Each year about 100 million Monarch butterflies from Canada and the northeastern United States make their journey to the Mexican Sierra Madre mountains in an astonishing two-month-long migration (Ref 1).  They fly 2,500 miles to a remote area that is only 60 square miles in size (Ref 1).  No one fully understands what triggers this mass movement of Lepidopterans.  But there is no getting away from the fact that this is a phenomenon that, as one review summed up, “staggers the mind”, especially when one considers that these butterflies are freshly hatched (Ref 1).  In short, Monarch migrants are always “on their maiden voyage” (Ref 2).  The location they fly to is home to a forest of broad-trunked trees that effectively retain warmth and keep out rain, factors that are essential for the Monarchs’ survival (Ref 1).
 
With a four-inch wingspan and a weight of less than a fifth of an ounce, it is remarkable that the Monarchs survive the odyssey (Ref 1).  Making frequent stops for nectar and water, they fly approximately 50 miles a day while avoiding all manner of predators.  Rapidly shifting winds over the Great Lakes and scorching desert temperatures in the southern states present formidable obstacles (Ref 1).  Nevertheless, the Monarchs’ finely tuned sense of direction gets most of them across.
 
It was not until 1975 that scientists first uncovered the full extent of the Monarch’s migration (Ref 1).  What has become clear since then is that only Monarchs travel such distances to avoid the “certain death of a cold winter”.  According to University of Toronto zoologist David Gibo, soaring is the key to making it to Mexico (Ref 1); indeed, flapping is about the most energy-inefficient way of getting anywhere.  Other aspects of the Monarch’s migration-linked behaviors, such as the reproductive diapause that halts energy-draining reproductive activity during the journey, continue to fascinate scientists worldwide (Ref 2).  Both diapause and the six-month longevity characteristic of migrant Monarchs are caused by decreased levels of Juvenile Hormone, which is itself regulated by four genes (Ref 2).
 
Exactly how Monarchs navigate so precisely to such a specific location is a subject of intense debate.  One theory suggests that they respond to the sun’s location, another that they are somehow sensitive to the earth’s magnetic field (Ref 1).  Recent molecular studies have shown that Monarchs have specialized cells in their brains that regulate their daily ‘clock’ and help keep them on course (Ref 3).  Biologist Chip Taylor from the University of Kansas has done some remarkable tagging experiments demonstrating that even if Monarchs are moved to different locations during the course of their journey south, they are still able to re-orient themselves and continue onwards to their final destination (Ref 1). 
 
A study headed by Steven Reppert at the University of Massachusetts has elucidated much of the biological basis of the timing component of Monarch migration (Ref 3).  Through a process known as time-compensated sun compass orientation, proteins with names such as Period, Timeless, Cryptochrome 1 and Cryptochrome 2 provide Monarchs with a well-regulated light responsiveness during both day and night (Ref 3).  While Cryptochrome 1 is a photoreceptor that responds specifically to blue light, Cryptochrome 2 is a repressor of transcription, efficiently regulating the period and timeless genes over the course of a 24-hour light cycle (Ref 3).  Investigations using Monarch heads have not only provided exquisite detail of the daily, light-dependent oscillations in the amounts of these proteins but have also revealed a ‘complex relationship’ of molecular happenings.
 
Indeed, the activities of both Cryptochrome 2 and Timeless are intertwined with at least two other timing proteins called ‘Clock’ and ‘Cycle’ (Ref 3).  Preliminary results suggest that Period, Timeless and Cryptochrome 2 form a large protein complex, with Cryptochrome 2 acting as a repressor of Clock- and Cycle-mediated transcription.  Cryptochrome 2 is also intimately involved with an area of the Monarch’s brain called the central complex, which likely houses the light-dependent ‘sun compass’ so critical for accurate navigation (Ref 3).
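The clock mechanism described above, a repressor feeding back on the transcription of its own activators, can be caricatured in a few lines of code. The sketch below is a generic Goodwin-style negative-feedback oscillator, a standard textbook toy for transcriptional clocks; it is not a model fitted to Monarch data, and every variable name and rate constant is an illustrative assumption.

```python
# Minimal Goodwin-style negative-feedback oscillator. Z plays the repressor
# role (loosely analogous to Cryptochrome 2 shutting down Clock/Cycle-driven
# transcription); X stands in for mRNA, Y for protein. All parameters are
# illustrative toy values, not measurements from any organism.

def simulate(steps=200000, dt=0.01, n=10):
    x, y, z = 0.5, 0.5, 0.5
    trace = []
    for i in range(steps):
        dx = 1.0 / (1.0 + z ** n) - 0.1 * x   # transcription, repressed by Z
        dy = 0.1 * x - 0.1 * y                # translation and decay
        dz = 0.1 * y - 0.1 * z                # repressor buildup and decay
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        if i % 100 == 0:
            trace.append(x)
    return trace

trace = simulate()
# Count local maxima in the latter half: a sustained rhythm shows repeated peaks.
tail = trace[len(trace) // 2:]
peaks = sum(1 for i in range(1, len(tail) - 1) if tail[i - 1] < tail[i] > tail[i + 1])
print(peaks)
```

With a Hill coefficient above the classical threshold (n greater than 8 for equal rate constants), the loop settles into sustained oscillation rather than a steady state, which is the qualitative behavior the protein measurements show.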
 
Reppert’s team has speculated that the Monarch’s dual-Cryptochrome light response system evolved into the single-Cryptochrome systems found in other insects through a hypothetical gene loss event (Ref 3).  Furthermore, they have suggested that the dual-Cryptochrome system itself arose through a duplication of an ancestral gene (Ref 3).  Biologist Christopher Wills wrote of gene duplication as a ‘rare occurrence’ in which “an extra copy of a gene gets placed elsewhere in the genome” (Ref 4, p. 95).  Seen from an evolutionary perspective, these two gene copies are then “free to evolve separately…shaped by selection and chance to take on different tasks” (Ref 4, p. 95).
 
While experiments have shown that transgenic Monarch Cryptochrome 1 can rescue Cryptochrome deficiency in other insects such as fruit flies, what remains elusive is how exactly gene duplication could have led to two proteins with such widely differing functions as those found in the two Monarch Cryptochromes.  Indeed, biochemist Michael Behe has been instrumental in revealing the explanatory insufficiencies of terms such as gene duplication and genetic shuffling within the context of molecular evolution.  As Behe expounded:
 
“The hypothesis of gene duplication and shuffling says nothing about how any particular protein or protein system was first produced- whether slowly or suddenly, or whether by natural selection or some other mechanism….. In order to say that a system developed gradually by a Darwinian mechanism a person must show that the function of the system could “have formed by numerous, successive slight modifications”…If a factory for making bicycles were duplicated it would make bicycles, not motorcycles; that’s what is meant by the word duplication.  A gene for a protein might be duplicated by a random mutation, but it does not just “happen” to also have sophisticated new properties” (Ref 5, pp.90, 94).
 
When it comes to supplying a plausible mechanism for how gene duplication and subsequent natural selection led to two distinctly functioning Cryptochromes, and how these then became integrated with other time-regulatory proteins in Monarch brains, there is a noticeable absence of detail.  Each successive slight modification of a duplicated gene would have had to confer an advantage for selection and chance to get anywhere.  Furthermore, the newly duplicated Cryptochrome would have had to become successfully incorporated into a novel scheme of daylight processing for migration patterns to begin.
 
Evolutionary biology must move beyond its hand-waving generalizations if it is to truly gain the title of a rigorous scientific discipline.  In the meantime, protein systems such as the Monarch’s Cryptochromes will continue to challenge what we claim to know about evolutionary origins.
     
References
1. NOVA: The Incredible Journey Of The Butterflies, aired on PBS, January 27, 2009. See http://www.pbs.org/wgbh/nova/butterflies/program.html
 
2. Haisun Zhu, Amy Casselman, Steven M. Reppert (2008), Chasing Migration Genes: A Brain Expressed Sequence Tag Resource for Summer and Migratory Monarch Butterflies (Danaus plexippus), PLoS ONE, Volume 3 (1), p. e1345
 
3. Haisun Zhu, Ivo Sauman, Quan Yuan, Amy Casselman, Myai Emery-Le, Patrick Emery, Steven M. Reppert (2008), Cryptochromes Define a Novel Circadian Clock Mechanism in Monarch Butterflies That May Underlie Sun Compass Navigation, PLoS Biology, Volume 6 (1), pp. 0138-0155
 
4. Christopher Wills (1991), Exons, Introns & Talking Genes: The Science Behind the Human Genome Project, Oxford University Press, Oxford, UK
 
5. Michael Behe (1996), Darwin’s Black Box: The Biochemical Challenge to Evolution, Touchstone/Simon & Schuster, New York

 

Copyright (c) Robert Deyes, 2009

Comments
Rob: First, do you understand computer architecture, classically "the machine language's view of the system"? A computer has nothing resembling foresight, as Tim has just exemplified. It is simply a machine that mechanically processes programmed instructions at bit level based on arrangements of logic gates and registers and clock and control signals, to give controlled predictable outputs. (Do you understand finite state machine algebra or register transfer algebra, or just plain old Boolean Algebra, gates and RS flip-flops and their extensions as D f/fs, JK f/fs, registers, counters, clocks, etc? These are what drive understanding what a PC does, how.) ALL the smarts in a computer is in the design put into its hardware and its programs and data structures. If you do not understand something that basic, sorry, but you are in no position to seriously discuss the issues you have raised. Instead you need to do some 101 level reading. As to the opposition between necessity and contingency, you continue to fail to understand that there are two very sharply distinct ways to be contingent, directed and undirected; the latter of which is directly familiar to every reasonably intelligent human being from how s/he interacts with the world. So, you are falling into self referential inconsistencies and selective hyperskepticism, even as you set out on creating a contextually responsive digital text string in English. Chance and design are quite distinct in ID inference contexts [and in engineering and in programming and in statistics and in management and in a lot of the rest of what people do in serious contexts in the world], and if you would take time to simply examine the repeatedly given die example you would see so through a concrete example. The explanatory filter (as adjusted to explicitly address aspects) is another way to look at it, from the view of the analysis of phenomena or objects by aspects. 
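The finite state machines kairosfocus invokes can be shown in a few lines: behavior that is nothing but a transition table mechanically applied, with no foresight anywhere. The toy machine below detects the bit pattern "11" in a stream; the state names and table are illustrative, not taken from any particular textbook.

```python
# A finite state machine: purely mechanical state transitions driven by a
# lookup table. This one recognizes whether the substring "11" appears in a
# bit stream. States and transitions are toy values for illustration.

TRANSITIONS = {                 # (state, input bit) -> next state
    ("start", 0): "start",
    ("start", 1): "one",
    ("one", 0): "start",
    ("one", 1): "seen11",
    ("seen11", 0): "seen11",    # accepting state is absorbing
    ("seen11", 1): "seen11",
}

def run_fsm(bits):
    state = "start"
    for b in bits:
        state = TRANSITIONS[(state, b)]   # the entire "decision": table lookup
    return state == "seen11"

print(run_fsm([0, 1, 0, 1, 1, 0]))  # prints True: "11" occurred
print(run_fsm([1, 0, 1, 0]))        # prints False
```

Registers, counters and flip-flops compose into exactly this kind of machine at larger scale: the output is fully determined by the table (hardware and program) plus the input stream.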
For this blog, we have in the glossary provided a description as follows:
Chance – undirected contingency. That is, events that come from a cluster of possible outcomes, but for which there is no decisive evidence that they are directed; especially where sampled or observed outcomes follow mathematical distributions tied to statistical models of randomness. (E.g. which side of a fair die is uppermost on tossing and tumbling then settling.)
Contingency – here, possible outcomes that (by contrast with those of necessity) may vary significantly from case to case under reasonably similar initial conditions. (E.g. which side of a die is uppermost, whether it has been loaded or not, upon tossing, tumbling and settling.) Contingent [as opposed to necessary] beings begin to exist (and so are caused), need not exist in all possible worlds, and may/do go out of existence.
Necessity — here, events that are triggered and controlled by mechanical forces that (together with initial conditions) reliably lead to given – sometimes simple (an unsupported heavy object falls) but also perhaps complicated — outcomes. (Newtonian dynamics is the classical model of such necessity.) In some cases, sensitive dependence on [or, “to”] initial conditions may lead to unpredictability of outcomes, due to cumulative amplification of the effects of noise or small, random/accidental differences between initial and intervening conditions, or simply inevitable rounding errors in calculation. This is called “chaos.”
Design — purposefully directed contingency. That is, the intelligent, creative manipulation of possible outcomes (and usually of objects, forces, materials, processes and trends) towards goals. (E.g. 1: writing a meaningful sentence or a functional computer program. E.g. 2: loading of a die to produce biased, often advantageous, outcomes. E.g. 3: the creation of a complex object such as a statue, or a stone arrow-head, or a computer, or a pocket knife.)
You may say what you want by way of selectively hyperskeptical and self referentially inconsistent objections about "fuzzy concepts"; but here you have a specific cluster of definitions with examples. Kindly tell us in what way these are so fuzzy -- in the context of ourselves as rational, learning, purposeful, designing animals living in communities and civilisations that depend on design for technology -- that they cannot be empirically recognised and differentiated as distinct. (If you object, your objections must not fall into selective hyperskepticism [which is inherently self-referentially inconsistent], e.g. a good part of the definition of design above is actually based on the classical definition of what the profession of engineering is about. Similarly, that of necessity is very close to what a description of dynamics and differential or difference equation models is about -- the framework that Laplace had in mind when he talked about his demon; the classical modern model of determinism.) GEM of TKI
kairosfocus
March 6, 2009, 02:08 PM PDT
Tim:
Here’s foresight. On board one, two humans play each other and trade down to king chasing king. Ok, these guys aren’t the greatest chess players, but they eventually get some foresight “Uh, this is never going to end,” and agree to a draw. On board two, two computer programs trade down and chase until their batteries run out. No, they haven’t the foresight to offer a draw unless it’s been programmed in, and then, well then it is experience, not foresight. This could be made more rigorous,
Tim, that's a great illustration, and it actually has been made more rigorous in computing theory -- it's the halting problem. It's not hard to prove that no computer can look at any board of any game (not just standard chess) and determine whether rational play will result in a never-ending game. But it's also pretty clear that humans don't have that ability either. And given that an appropriately programmed computer can predict the consequences of chess moves better than any human can, it's not unreasonable to think that an appropriately programmed computer can predict, better than humans, whether games will go on forever.
R0b
March 6, 2009, 02:06 PM PDT
"Yes, I know that computers’ capabilities are bestowed by programmers. The relevant fact is that an appropriately programmed computer has the capability of foresight." --ROb I disagree. Computers, even appropriately programmed computers, are NOT capable of foresight. Although computers may demonstrate outputs commensurate with what would be called foresight in their human programmers, that does not mean that the computer is capable of foresight. A simple example: in a game of chess a very modest computer program may be able to weigh several future "states" of the board and after comparing the "value" of each, choose the best and push a pawn as opposed to knight. The player opposite may comment, "This program was able to see ahead and determine that moving the knight was a poor choice because I would have . . . " Thus, it is very tempting to say that the computer had foresight. The fact of the matter, though, is that the computer did nothing more than manipulate a ton of this: 101101001010100110111110001010101010110101011010101011010101000001 into some of this 10101101000101010100000111111111110 based on our experience of the game of chess. That's not computer foresight; it's number crunching. Here's foresight. On board one, two humans play each other and trade down to king chasing king. Ok, these guys aren't the greatest chess players, but they eventually get some foresight "Uh, this is never going to end," and agree to a draw. On board two, two computer programs trade down and chase until their batteries run out. No, they haven't the foresight to offer a draw unless it's been programmed in, and then, well then it is experience, not foresight. This could be made more rigorous, but I think it hints at a reason you are having such difficulty with the directed/random/law definitions. This is why I would also disagree with the idea "that computers are capable of directed contingency". More on that later. Jerry at 108, nicely put. The introduction of agency . . . 
ROb, you seem to want agency to be reducible to law or chance or to some category that is tied to them, but what if intelligence is not reducible in a mechanistic way? I may have misstated your case, so never mind that, but I do find that looking at agency in this way is actually a lot more helpful and, for me, a lot more comprehensible.
Tim
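Tim's description of a program that "weighs several future states of the board", compares their values and picks the best is, mechanically, minimax search. A toy sketch follows; the game tree and its leaf values are invented for illustration and have nothing to do with real chess.

```python
# A minimal minimax search over a hand-made game tree: the "number crunching"
# lookahead Tim describes. Leaves hold hypothetical position values; the
# machine simply propagates max/min up the tree, no understanding required.

def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):      # leaf: a scored position
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Toy tree: two candidate moves, each answered by two opponent replies.
tree = [[3, 5],    # move A: opponent will hold us to 3
        [2, 9]]    # move B: opponent will hold us to 2
best = max(range(len(tree)), key=lambda i: minimax(tree[i], maximizing=False))
print(best)  # prints 0: move A's worst case (3) beats move B's (2)
```

Whether this table-driven lookahead counts as "foresight" is exactly the point under dispute in the thread; the code only shows what the machine actually does.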
March 6, 2009, 01:30 PM PDT
Okay, I don't have time to carefully read, much less respond to everyone's comments, so I'll try to summarize a few points:
- A common point of criticism by the scientific community is ID's vagueness (cf "written in jello"). IMO, the definitions offered in the glossary lend weight to that criticism. They're great as a basis for endless philosophical debates, but they don't work as a basis for a scientific theory. The scientific community is waiting for a technical treatment of ID theory, not hand-holding explanations by way of examples and Webster definitions.
- If JT recognizes that not all contingency is intelligence, then he should not have used the word "equates". The UD denizens are certainly correct on that. It is also a fact that "A is not equal to B" is not the same as "A is the complement of B". Characterizing JT's position on randomness and law as the former rather than the latter is unfair. If we recognize his position as the latter, then his blunder is that he should have said "entails" rather than "equates". By focusing on this blunder, we miss the meat of the issue, which is that ID seems to portray intelligence as random.
- Computers operate according to physical laws. Yes, the state transitions depend on the current state, which includes the software stored on the system and the configuration of the hardware. And yes, software and hardware are typically designed by intelligent humans. But humans are not part of the computer system, so the mental process of designing hardware and software is separate from the process of program execution. The latter uncontroversially operates according to the laws of physics. I think that any disagreement on this issue is purely semantic.
- If "contingency", "randomness", "non-determinism", and "chance" are distinct in ID theory, then ID should spell out that distinction. Atom seems to equate non-determinism with contingency, and randomness with undirected contingency. Is that how the ID camp in general uses those terms? Is there anywhere in the voluminous probability literature where I can read about this thing called "directed contingency" that is neither deterministic nor random?
- ID needs to spell out the distinction between "directed" and "undirected" in a scientific fashion, rather than a Webster definition. If a computer program uses data to predict the consequences of various courses of action, and then takes the course of action with the most favorable predicted consequence, does that count as "directed"? How about "intelligent"? Hopefully I'll have more time later.
R0b
March 6, 2009, 01:16 PM PDT
ROb, "intelligence = randomness". What is clear is that you are not random er ...
bFast
March 6, 2009, 01:05 PM PDT
kairosfocus, I can only scratch the surface of your lengthy posts. I wish I had time to address them exhaustively, but I don't.
But, Rob and JT, randomness is NOT to be defined as or seen as non-determinism.
Says who? I'm happy to use ID's definitions of randomness and non-determinism if you'll tell me what they are.
As has been repeatedly stated — but just as repeatedly ignored: mechanical forces will give rise to natural regularities, but there are CONTINGENT situations where under remarkably similar initial conditions, quite diverse outcomes are possible.
Who ignored that fact? Your description of "CONTINGENT situations" is exactly that of non-deterministic processes (although it might also describe deterministic processes that are chaotic). Is contingency synonymous with non-determinism?
Nor is this a strange or unexpected definition [of intelligence]: it is immediately recognisable from our experience and observation of our fellow, rational and moral animals. Why, then is there now a pursuit of infinite regress by demanding “definition” of “directedness” vs “undirectedness”?
Because I want to know what ID means by "directed". If "directed contingency" is a key term in ID theory, then ID needs to define it. And pointing to a Webster or Wikipedia definition only reinforces the perception that ID is not a scientific theory. Also, the above definition doesn't tell us whether "ID says that “intelligence” is not reducible to law, matter and energy." So let's resolve that right now. Does ID say that or not?
Directedness is a subset of contingency, as has both been stated and exemplified.
Yes, I know, and I stated as much when I said that intelligence is also characterized by directedness. The question is whether directedness is related to or independent of determinacy.
Computer programs are capable of no more foresight than was written into them by their programmers. Stochastic inputs do not change that, they simply give rise to patterns based on the stochastic inputs, e.g Monte Carlo simulations.
Yes, I know that computers' capabilities are bestowed by programmers. The relevant fact is that an appropriately programmed computer has the capability of foresight. The output of a program can also be contingent. So it seems that computers are capable of directed contingency. Is that correct or not?
Your favourite rhetorical assertion that ID is assuming what it should not is exposed by the simple exercise of tossing dice, one loaded, one unloaded.
If you're talking about my point that ID assumes the irreducibility of intelligence/design, I get that straight from ID proponents. Would you like me to provide some quotes?
Can you tell the difference?
Of course. As I said before, the concept of determinacy is pretty well defined. If a die is loaded so it always lands on the same side, then throwing that die is a deterministic process. Throwing an unloaded die, on the other hand, is a classic illustration of non-determinism. Is non-determinism the same as contingency? If so, then your claim (is this ID's claim?) is that intelligence is non-deterministic. Is ID anti-compatibilist?
repeatedly been pointed out. e.g. at 96 by bFast: A is not equal to B,
As I've already pointed out, this doesn't fully state JT's position. Not only is randomness not equal to law/determinism, but the two are also complements. If intelligence is also the complement of law, then it follows that intelligence = randomness. If intelligence is only a subset of the complement of law, then it follows that intelligence entails randomness, but not necessarily vice-versa. If JT acknowledges the directed/undirected distinction as meaningful, then "equates" was not the right word. But do you agree that intelligence entails randomness?
R0b
March 6, 2009, 11:24 AM PDT
PS: Jerry, it is usually more useful to build up to metaphysics in light of empirical observations, per the comparative difficulties criterion of factual adequacy. In this case [cf the just above], we observe chance and design in action. Let the metaphysical chips lie where they fly, having made that observation.
kairosfocus
March 6, 2009, 10:50 AM PDT
CJY, 106:
Why would Kairosfocus disagree when he helped collect the definitions within the glossary of terms? Which definitions do you have a problem with and why? However, in conclusion, it seems that you don’t want examples because all the examples help bolster the case for ID and you have no counter examples. It seems that you wish to nitpick the definitions, which of course is not a problem if you have genuine concerns. Examples are extremely important in any science as it is the examples — observations — which either back up the science in question or falsify it.
Pree-zactly! Excellent. And the examples start at two repeatedly dropped, tumbling and settling dice. One fair, the other loaded: [1] Dropping on being let go -- natural regularity tracing to mechanical force. [2] Tumbling to one of several possible outcomes under essentially similar initial conditions: contingency. [3] Die A settling to one of 6 outcomes with odds about 1 in 6: chance. [4] Die B NOT settling to that pattern but to, say, having 6 uppermost 1/2 the time: design, i.e. directed contingency. Rob and JT, can you see the differences? Why or why not? GEM of TKI
kairosfocus
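The two-dice example can be run as a simulation. The sketch below uses the probabilities from the comment itself (a fair die, and a loaded die showing a six about half the time); these are assumptions taken from the example, not measured data, and simply show that the two patterns are empirically distinguishable from their frequencies.

```python
import random

random.seed(1)  # fixed seed so runs are reproducible

def fair_roll():
    return random.randint(1, 6)

def loaded_roll():
    # Loaded as in the example: a six comes up about half the time,
    # the other five faces sharing the remainder.
    return 6 if random.random() < 0.5 else random.randint(1, 5)

N = 60000
fair_sixes = sum(fair_roll() == 6 for _ in range(N)) / N
loaded_sixes = sum(loaded_roll() == 6 for _ in range(N)) / N
print(round(fair_sixes, 2), round(loaded_sixes, 2))  # ~0.17 vs ~0.50
```

A formal treatment would use a chi-square goodness-of-fit test against the uniform distribution; with this many rolls the raw frequencies already separate the two dice cleanly.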
March 6, 2009, 10:47 AM PDT
In the process of trying to find what I have said in the past, I came across this comment I made about 3 years ago. Which just goes to show us that there are few new discussions here. This is what I said about chance, law and intelligence three years ago "Anytime the discussion gets philosophical, I get uneasy because the terms used are very general and that to me means vague. I am sure they have precision but the precision is not in the lexicon we commonly use. Given that, I have a few comments. Is there really such a thing as chance? Or when we say chance do we mean that we do not understand all the forces or complexities that underline a situation and what appears or happens because of our insufficient knowledge is then described as random or by chance. If all the laws are working and there is nothing but atoms, quarks, mesons, etc, are not the existence, characteristics and motion of each really determined by some basic physical laws. Now I understand that there is something called the uncertainty principle and that Quantum Mechanics causes some unusual results but does this mean that some of these particles are not following some basic set of laws. I understand that we may not know just what some of these laws and forces might be but that does not mean these basic laws are not operating and causing these particles to behave in a specific manner. I also understand that the complexity of the particles and forces involved may be beyond any calculations we are capable of but that also does not mean that basic laws are not behind everything. So is what we mean by chance really just a subset of law? Isn’t the term “agency” just the introduction of free will of some intelligence into the equation? This intelligence exerts some new force into the physical world, thus moving the basic particles either here or there depending upon the nature of the force created by the intelligence. 
In other words the forces that would ordinarily be operating based on laws are modified somewhat by a new force caused by a freely thought out action of an intelligence. If it is not freely thought out then we have to assume the "so called" intelligence is determined by some other laws we may not be aware of. I know that chance has been discussed in detail elsewhere on this blog but I am just trying to put the framework offered here in this post into some other framework that I can understand and possibly discuss with a typical person who doesn't have a background in philosophy. And to also understand it better myself. In other words we have laws and then we have free will. And I understand that philosophers have been discussing this topic for a few thousand years with no consensus."
jerry
March 6, 2009, 10:32 AM PDT
ROb: "It should be clear by now that JT sees law and randomness as complements, which is logically the case if law=determinism and randomness=non-determinism. So if intelligence doesn't fall in the category of law, then it must fall in the category of randomness." Didn't I already deal with this logical fallacy when I pointed out, as everyone already knows ... "A is not equal to B, C is not equal to B, therefore A = C" ... is completely illogical. Figure it out already!!!! As Atom said above, move along now, nothing more to see here ...
CJYman
March 6, 2009, 10:24 AM PDT
Pattern described by low contingency = pattern defined by a mathematical description of regularity (law). Pattern described by high contingency = pattern not defined by mathematical description of regularity. Pseudo random generators do not generate true contingency since there are regularities inherent in the outputs. They are described by chance + law. There is a consistent regularity in the output, although it appears random in the short term. True randomness is an example of high contingency since consistent regularities are not found within background noise. True randomness is described by chance even though it is the output of a conglomerate of a chaotic assembly of laws. There is no consistent regularity in the output. Check out random.org -- especially the write up re: bitmap images and the difference between true randomness and pseudo-randomness. It is a wealth of information in relation to this topic. Either way, though, neither pseudo randomness nor true randomness produce the same type of patterns that are the result of the modeling of future possibilities, target generation, and harnessing law and chance to produce those targets. This foresight produces functionally specified patterns in which the info content uses up all probabilistic resources, whether this foresight is artificial or conscious. Tell me, do you possess foresight and do you use your foresight to produce these comments of yours? Would chance and law *absent your foresight* be a good explanation for these comments that appear to come from a foresighted agent with the handle "ROb?" ROb: "I think that solid definitions would be more helpful to ID’s case than examples. (Kairosfocus might disagree.)" Why would Kairosfocus disagree when he helped collect the definitions within the glossary of terms? Which definitions do you have a problem with and why? 
However, in conclusion, it seems that you don't want examples because all the examples help bolster the case for ID and you have no counter examples. It seems that you wish to nitpick the definitions, which of course is not a problem if you have genuine concerns. Examples are extremely important in any science as it is the examples -- observations -- which either back up the science in question or falsify it.
CJYman
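The earlier point about regularities hidden in pseudo-random output can be made concrete with a deliberately weak generator. The parameters below are toy values chosen so the repetition is easy to find; they are illustrative assumptions, not anything random.org or a real library uses.

```python
# A deliberately weak linear congruential generator: its output looks noisy
# at first glance but is fully determined by law and cycles with a short
# period. Toy parameters (m=64, a=13, c=7) satisfy the full-period conditions.

def lcg(seed, m=64, a=13, c=7):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=1)
stream = [next(gen) for _ in range(200)]

# Find the period: the smallest shift p with stream[i] == stream[i+p] throughout.
period = next(p for p in range(1, 100)
              if all(stream[i] == stream[i + p] for i in range(100)))
print(period)  # prints 64: the "random-looking" stream repeats exactly
```

Real generators use enormous periods, but the principle is the same: the regularity is there by construction, which is the distinction being drawn between pseudo-randomness and true randomness.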
March 6, 2009, 10:18 AM PDT
R0b wrote:
It should be clear by now that JT sees law and randomness as complements, which is logically the case if law=determinism and randomness=non-determinism. So if intelligence doesn’t fall in the category of law, then it must fall in the category of randomness.
Sorry if JT has problems with classification and therefore is making false dichotomies, but don't blame us if we don't follow his example. If Law is Determinism, then the complement is Non-Determinism, not "randomness". This is simple. Now JT wants to take the extra step and claim (implicitly): "All non-determinism is randomness". Excuse me!? Sorry if we don't hold the same metaphysical belief he does. Randomness is one type of non-determinism; Agency is another. One is goal-directed, the other is not. Very simple. Just admit it, JT made a blunder and no amount of smoke at this point is going to cover that up. Just man up and move on. Or you can keep trying to convince us that all non-determinism is equal to randomness. (I hope you have some way of demonstrating this claim...)
Atom
March 6, 2009, 10:09 AM PDT
Lets consider this by example.
I appreciate that, but I think that solid definitions would be more helpful to ID's case than examples. (Kairosfocus might disagree.)
R0b
March 6, 2009 at 9:54 AM PDT
Tim:
Both the novel and the physical state of the computer changed, but they did not evolve except according to the novelist’s will and the initial information that was encoded, and I think we know how that type of evolution matches with Neo-Darwinian evolution. . . er, not very well.
When I talked about the system state evolving, I wasn't making any allusion to biological evolution. Sorry if that was confusing.R0b
March 6, 2009 at 9:46 AM PDT
Tim:
“If “foresight” makes a process “directed”, then computers are apparently capable of directed contingency, as long as they have a random number generator that meets ID’s definition of contingency.” And I wrote this: “If bald foreheads make a man sexy, then computers are apparently sexy as long as they have a randomly chosen surface that is shiny and hairless, thus somehow matching someone’s definition of bald.”
If your statement is analogous to mine, then apparently a random number generator (even one that is QM-based) does not meet ID's definition of contingency. I'll take that as a data point in my never-ending search for ID's definition of contingency.R0b
March 6, 2009 at 9:31 AM PDT
JT and ROb, What about those laws? I ask because they too are evidence for ID and a designer.Joseph
March 6, 2009 at 4:26 AM PDT
PS: In my always linked, I discuss the issue of the origin of functionally specific complex information, in the context of lucky noise vs mind. In so doing, I already pointed out that there is a significant threshold of complexity [i.e. no. of bits, so the config space scales as 2^n] that has to be crossed -- per chance + necessity only -- by random generation of patterns before selection processes can act on differential functionality to hill-climb to optimality. In short, you have to get to the beach of an island in the ocean of possibilities before you can think about climbing to its mountain tops of peak performance. In discussing this, I have found it important to raise an issue on the link between views on the origin of mind and the implications for reliability; that evo mat advocates will doubtless find challenging or even painful, but I think we need to soberly think it through, especially given what has run on above: _____________ . . . [evolutionary] materialism [a worldview that often likes to wear the mantle of "science"] . . . argues that the cosmos is the product of chance interactions of matter and energy, within the constraint of the laws of nature. Therefore, all phenomena in the universe, without residue, are determined by the working of purposeless laws acting on material objects, under the direct or indirect control of chance. But human thought, clearly a phenomenon in the universe, must now fit into this picture. Thus, what we subjectively experience as "thoughts" and "conclusions" [as well as "purposes," "goals," "plans" and "designs"] can only be understood materialistically as unintended by-products of the natural forces which cause and control the electro-chemical events going on in neural networks in our brains. (These forces are viewed as ultimately physical, but are taken to be partly mediated through a complex pattern of genetic inheritance ["nature"] and psycho-social conditioning ["nurture"], within the framework of human culture [i.e. 
socio-cultural conditioning and resulting/associated relativism].) Therefore, if materialism is true, the "thoughts" we have and the "conclusions" we reach, without residue, are produced and controlled by forces that are irrelevant to purpose, truth, or validity. Of course, the conclusions of such arguments may still happen to be true, by lucky coincidence — but we have no rational grounds for relying on the “reasoning” that has led us to feel that we have “proved” them. And, if our materialist friends then say: “But, we can always apply scientific tests, through observation, experiment and measurement,” then we must note that to demonstrate that such tests provide empirical support to their theories requires the use of the very process of reasoning which they have discredited! Thus, evolutionary materialism reduces reason itself to the status of illusion. But, immediately, that includes “Materialism.” For instance, Marxists commonly deride opponents for their “bourgeois class conditioning” — but what of the effect of their own class origins? Freudians frequently dismiss qualms about their loosening of moral restraints by alluding to the impact of strict potty training on their “up-tight” critics — but doesn’t this cut both ways? And, should we not simply ask a Behaviourist whether s/he is simply another operantly conditioned rat trapped in the cosmic maze? In the end, materialism is based on self-defeating logic . . . . In Law, Government, and Public Policy, the same bitter seed has shot up the idea that "Right" and "Wrong" are simply arbitrary social conventions. This has often led to the adoption of hypocritical, inconsistent, futile and self-destructive public policies . . . . In short, ideas sprout roots, shoot up into all aspects of life, and have consequences in the real world . . . __________________ Okay, onlookers, is this what is really going on under the surface of the above? Why or why not?kairosfocus
March 6, 2009 at 1:16 AM PDT
6] Rob, 97: If intelligence, according to ID, isn’t reducible to law, and if the term “law” indicates that a process is deterministic, which seems a reasonable interpretation, then intelligence must be non-deterministic. Other terms that are commonly used interchangeably with “non-deterministic” are “stochastic” and “random”. Onlookers, we experience and observe every day, routinely, that there is another form of highly contingent process: design, premised on intelligence. And, we have repeatedly pointed out that the proper opposition is: low/no contingency vs high contingency, the latter being in some cases undirected, in others directed. So, to insist like Rob has done in the above clip -- sad to say -- is to willfully set up and knock over a strawman. Let us again get a testimony against interest from Wikipedia:
Design is used both as a noun and a verb. The term is often tied to the various applied arts and engineering (See design disciplines below). As a verb, "to design" refers to the process of originating and developing a plan for a product, structure, system, or component with intention[1]. As a noun, "a design" is used for either the final (solution) plan (e.g. proposal, drawing, model, description) or the result of implementing that plan in the form of the final product of a design process[2].
Plainly, we are not being idiosyncratic; as even notoriously anti-ID Wikipedia has had to acknowledge here. The saddest thing about this is that in other contexts where we will not be heard in our own voice, such strawmen will be presented as what we think, and will be taken as gospel truth. 7] If “foresight” makes a process “directed”, then computers are apparently capable of directed contingency, as long as they have a random number generator that meets ID’s definition of contingency. Again, Rob, there are two distinct varieties of the contingent: directed and undirected. You have -- predictably -- substituted undirected for directed. Also, programs have no more insight or foresight than was written into them by their programmers in the algorithms and data input from the situation they are applied to. GIGO -- garbage in, garbage out. 8] When a non-interactive program is executing, the physical state of the system evolves according to the laws of physics. This is true regardless of how the system got into the state that included a loaded program that was starting to run, which means that it’s independent of the question of who or what designed and programmed it. This is what we mean when we say that computers operate according to law, or law and chance. The fundamental cycle of programming is: input, process, output. Whether the inputs were stored in input data structures in memory or not, or even written into the code being executed, inputs there are. Similarly, processing is based on the inputted design of the algorithm, and reflects its assumptions about the world. Thirdly, the laws of nature and the laws of mathematics etc. constrain the design of a processor and its functioning; they do not determine it; cf the architectures of a 4004 of 1971 with a 68020 of 1993 or a Pentium at whatever level, etc. 
So, processor behaviour as that of a physical hardware object carrying out a software program based on microcode [or even hard-wired instruction execution] is NOT independent of the intelligent inputs of the designers involved. Engineering uses the forces and materials of nature to intelligently and economically achieve goals by creating designed structures and processes, hopefully for the benefit of humanity. And, as the growing body of patents discloses, this is highly contingent, creative to the point of allowing intellectual property rights, and non-random. In short, pardon, your selective, self-referentially incoherent hyperskepticism is showing. 9] if anyone thinks that JT’s or my usage of ID terms is unreasonable, then they should work on coming up with definitions that don’t raise more questions than they answer, and then using the terms consistently. Onlookers, all of this is in a context where there is a whole vocabulary discussed above in a glossary. As for one definition leading to further and further questions, the first underlying rhetorical issue is that we are looking here at a refusal to seriously interact with how we have identified concepts by reference to key case studies, which allows us to escape the infinite regress of inferences without resort to circularity. Rob, go get those two dice and play with them for a while, then come back and tell us what you learned. (E.g. Why not go to Las Vegas and try to play a dice game with loaded dice and see what happens? Tell us why, in light of, say, JT's equating of undirected and directed contingency. [Of course, this last one is strictly a thought exercise! We don't want Rob to go to gaol as a cheat.]) The second rhetorical tactic is to pretend that we are using words in idiosyncratic and confusing ways. In fact, as the very glossary testifies, we are using terms in quite common and standard ways. Ways that even Wikipedia with an interest against ID is forced to acknowledge as legitimate. 
So, we have a reasonable expectation that informed readers such as Rob and JT will recognise those ways, especially since we have given concrete examples that can be carried out as experiments -- e.g. fair vs loaded dice. Thirdly, there is a turnabout false accusation involved. For, it is JT (backed up by Rob) who is using plainly idiosyncratic "definitions," when he attempts to equate chance and design. By sharpest contrast, ever since the days of Plato, we have recognised the distinction, and it continues to be useful today, as even Wikipedia has to acknowledge. Indeed, in law, we recognise that sufficiently innovative designs are intellectual property. Also, we know that where such designs involve irreducibly complex systems and/or functionally specific complex information in excess of, say, 1,000 bits of capacity, no reasonable random search will get to the islands of function in the config space, on basic probability calculations. Nor is there good reason to believe that the substance of Microsoft Office 2007 is written into the laws of the universe, even if there is a blend of chance and necessity at work. Nor did Mr Gates hire a zoo full of monkeys to create it by pounding keyboards at random. Nor did he set up a random search-and-select-for-function process as his primary design tool. All of this is perfectly patently obvious. _________________ Onlookers, the reduction to absurdity implicit in evolutionary materialism is becoming ever more painfully plain in this thread. Sadly so. Rob and JT, surely, you can do better than this! GEM of TKIkairosfocus
March 6, 2009 at 1:03 AM PDT
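Kairosfocus's 1,000-bit threshold above can be put in concrete numbers. The sketch below is illustrative; the 10^150 figure for the universe's probabilistic resources is taken here as an assumption, being the estimate commonly cited in these discussions:

```python
# Size of the configuration space for n bits is 2**n.
n = 1000
space = 2 ** n

# Assumed figure: ~10^150 is the estimate of the universe's total
# probabilistic resources commonly cited in these threads.
resources = 10 ** 150

print(len(str(space)))         # 2^1000 has 302 decimal digits (~1.07e301)
print(space > resources ** 2)  # True: the space dwarfs even the squared resources
```

Whatever one makes of the inference drawn from it, the arithmetic itself is straightforward: 2^1000 is roughly 10^301, so exhausting such a space by blind sampling is out of reach for any assumed resource count of this order.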
Ah, boy . . . One could not make up the last few dozen exchanges in this thread! Priceless, but in another sense, ever so sadly revealing on what is going on with evolutionary materialist thought, which is plainly now at the point where the bankruptcy is obvious to all who will but look. (However, proverbially, there are those who claim to be sighted but . . . ) On a few points: 1] Rob, 97: JT sees law and randomness as complements, which is logically the case if law=determinism and randomness=non-determinism. So if intelligence doesn’t fall in the category of law, then it must fall in the category of randomness. But, Rob and JT, randomness is NOT to be defined as or seen as non-determinism. That is an artifact of the evolutionary materialist view imposed a priori, a la Lewontin. And, it is a point where the factual adequacy gap between what evo mat permits and what we observe and experience is obvious. As has been repeatedly stated -- but just as repeatedly ignored: mechanical forces will give rise to natural regularities, but there are CONTINGENT situations where under remarkably similar initial conditions, quite diverse outcomes are possible. That this last is as close as the tossing of a fair vs a loaded die, and as close as the falling that reliably happens when such a die is no longer supported, should tell us just how pervasive and accessible the relevant facts are. So, one is reminded of the parable of Plato's cave: someone has got out, and has returned, inviting his fellows to join him in getting up, looking around and seeing the apparatus of projection of the shadow show, then to climb out of the cave. But, for the "true believer" in evo mat: reality "cannot" be different from the shadow-show . . . 2] Presumably, the ID position is that intelligence is characterized not only by contingency (can we agree that this means randomness?), but also by directedness. 
So now ID’s task is to come up with a reasonably unambiguous definition of the distinction between “directed” and “undirected”. Onlookers, observe: at the top-right of this page, there is a glossary. In that glossary is an item on intelligence, and attention was drawn to that item specifically in post no. 89. The definition constitutes a citation from a known anti-ID source (a la admission against interest), Wikipedia, namely:
Intelligence – Wikipedia aptly and succinctly defines: “capacities to reason, to plan [which plainly implies foresight and is directly connected to the task of designing], to solve problems [again foresighted and goal directed], to think abstractly, to comprehend ideas, to use language, and to learn.”
Nor is this a strange or unexpected definition: it is immediately recognisable from our experience and observation of our fellow, rational and moral animals. Why, then, is there now a pursuit of infinite regress by demanding "definition" of "directedness" vs "undirectedness"? Because of confusion between concept formation through experience of sufficient examples to infer and label a pattern and the role of definition as identifying borders of concepts. We point to experience and observation of concrete cases appealing to family resemblance and the pattern-recognising capacity of the mind. We point out that such is logically prior to precising statements or genus-difference taxonomies, etc. Indeed, we check statements for adequacy against known cases and counter-cases. (And lurking in the background is the point that there are some truths that once we as rational-moral animals experience enough and come to understand, we see they must be so; i.e. these are self-evident.) So, we point to the cases already given, and ask for interaction with them, a dropped fair vs a dropped loaded die; a fork in the road taken by choice vs at random, design of aircraft, etc. Too often, only to be ignored or dismissed as the other parties rush on to reductio ad absurdum. 3] Is directedness a sub-spectrum of the determinacy spectrum, or is it orthogonal to determinacy? If computer programs are capable of foresight, does the execution of such a program, with a stochastic input, constitute directed contingency? Is directed contingency the same as libertarian free will? Where can I find a usable definition of directed/undirected in the ID literature? See what we mean? Directedness is a subset of contingency, as has both been stated and exemplified. Rob, go get yourself two dice, one loaded, one fair. Toss them a few dozen times. What is regular, what is diverse? What is stochastic and what is purposeful and goal-directed? 
Computer programs are capable of no more foresight than was written into them by their programmers. Stochastic inputs do not change that; they simply give rise to patterns based on the stochastic inputs, e.g. Monte Carlo simulations. Whether or not there is libertarian free will as an ontological matter, we observe and experience directed contingency. ID starts with that fact of experience, and anchors itself to that realm. Your favourite rhetorical assertion that ID is assuming what it should not is exposed by the simple exercise of tossing dice, one loaded, one unloaded. Can you tell the difference? Why or why not? [And if we onlookers can see you thus ignoring or rejecting obvious facts, do you not see that you are reducing yourself to absurdities before our eyes?] 4] Rob, 90: JT. If you want to have the same kind of success that the ID movement enjoys, then you need to learn some things from UD denizens. Yes, Galileo: if you want to experience the same success as the Simplicios of this world, you really need to stop listening to those silly Copernicans. It will only get you into trouble with the Magisterium to keep on raising silly questions about gaps in the well-proven Ptolemaic theory! It has no weaknesses! None! 5] Do you really not understand JT’s points, or are you only pretending to misunderstand them so that you can insult him? . . . . When JT talks about ID’s conception of intelligence, he is referring to the idea commonly expressed by ID proponents, along the lines of “ID says that “intelligence” is not reducible to law, matter and energy.” Rob, the issue is not with "misunderstand[ing]" JT, it is that JT is quite evidently and even obviously reducing himself to absurdity before our shocked eyes; and you are trying to tell us not to believe our "lyin eyes." We are telling you instead: please, stop the intellectual self-destruction! Please. PLEASE . . . ! 
For example, JT is simplistically and EXPLICITLY equating intelligence with randomness [in so many words, cf above], ending up in the logical fallacy that has now repeatedly been pointed out. e.g. at 96 by bFast:
A is not equal to B, C is not equal to B, Therefore, A is equal to C. Let A = Random. Let B = Deterministic. Let C = Intelligent agency. A (random) is not B (deterministic) C (Intelligence) is not B (deterministic) Therefore A (random) = C(Intelligence).
It is not an "insult" to point out gross error that has serious real-world consequences and to correct it; that is to act responsibly. Nor is it an insult to be forced -- in the face of insistence on error -- to reluctantly point out that a reductio ad absurdum is in progress. Wish the intellectual self-destruction were not so, but -- sadly -- it is. [. . . ]kairosfocus
March 6, 2009 at 1:02 AM PDT
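The fair-vs-loaded dice experiment kairosfocus keeps proposing can be run in software as well as on a table. A small simulation sketch; the loading weights below are purely illustrative:

```python
import random
from collections import Counter

rng = random.Random(0)  # fixed seed so the run is repeatable

def fair_roll():
    return rng.randint(1, 6)

def loaded_roll():
    # Illustrative loading: the six comes up with probability 1/2.
    return rng.choices([1, 2, 3, 4, 5, 6], weights=[1, 1, 1, 1, 1, 5])[0]

N = 60_000
fair = Counter(fair_roll() for _ in range(N))
loaded = Counter(loaded_roll() for _ in range(N))

# A fair die is high-contingency but unbiased: each face lands near N/6.
# The loaded die shows a stable regularity riding on the contingency.
print(fair[6] / N)    # close to 1/6
print(loaded[6] / N)  # close to 1/2
```

Either die is contingent roll by roll; the difference only shows up as a statistical regularity over many tosses, which is the point of the thought experiment.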
Atom, you and CJYMan are imputing poor thinking to JT on the basis of this statement: "I am saying that the I.D. conception of “intelligence” or “intelligent agency” equates to randomness because ID says it is something distinct from law." It should be clear by now that JT sees law and randomness as complements, which is logically the case if law=determinism and randomness=non-determinism. So if intelligence doesn't fall in the category of law, then it must fall in the category of randomness. Of course, "equates" implies not only that intelligence is random, but also that everything random is intelligent. If JT didn't mean that, then equates was not the right term for him to use. If he meant to say that ID's version of intelligence entails randomness, do you agree with him? Presumably, the ID position is that intelligence is characterized not only by contingency (can we agree that this means randomness?), but also by directedness. So now ID's task is to come up with a reasonably unambiguous definition of the distinction between "directed" and "undirected". Contrast directedness with determinacy, which is well-defined. We can treat determinacy as a boolean variable -- that is, processes are either fully deterministic or they're not. Or we can talk about a continuum with non-deterministic at one end and deterministic at the other. Is directedness a sub-spectrum of the determinacy spectrum, or is it orthogonal to determinacy? If computer programs are capable of foresight, does the execution of such a program, with a stochastic input, constitute directed contingency? Is directed contingency the same as libertarian free will? Where can I find a usable definition of directed/undirected in the ID literature?R0b
March 5, 2009 at 8:34 PM PDT
ROb:
If intelligence, according to ID, isn’t reducible to law, and if the term “law” indicates that a process is deterministic, which seems a reasonable interpretation, then intelligence must be non-deterministic. Other terms that are commonly used interchangeably with “non-deterministic” are “stochastic” and “random”.
Let's consider this by example. You have a pool table with some balls on it. You strike the cue ball with the pool cue. From this point on, what happens on the table is governed by law; it is deterministic. In other words, if you programmed the situation into a computer, accurately measuring everything worth measuring, the computer could accurately predict where all of the balls will end up. Now, introduce randomness to the pool table after the cue ball is struck. (I don't know, vibrate the thing in a truly random way.) Your computer program can no longer accurately predict where the balls will end up. Introduce intelligence. The cue ball is struck, but under the table are a bunch of pegs that can be pushed to raise up lumps in the table. Have an intelligent agent guide the balls to where he would have them go. The computer program cannot predict where the balls will go; it is not deterministic. However, it is also not random. CJYman:
A is not equal to B, C is not equal to B, Therefore, A is equal to C.
Let A = Random. Let B = Deterministic. Let C = Intelligent agency. A (random) is not B (deterministic) C (Intelligence) is not B (deterministic) Therefore A (random) = C(Intelligence). Man this conversation is stupid!bFast
March 5, 2009 at 4:22 PM PDT
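The syllogism bFast spells out above (A is not B, C is not B, therefore A is C) can be checked mechanically: a single counterexample shows the form is invalid. A trivial sketch, with the labels from the thread:

```python
# The form under dispute: A != B and C != B, therefore A == C.
# One counterexample suffices to show the form is invalid.
A, B, C = "random", "deterministic", "intelligent"

premise1 = (A != B)   # random is not deterministic
premise2 = (C != B)   # intelligent is not deterministic
conclusion = (A == C) # "therefore random is intelligent"

print(premise1 and premise2)  # True: both premises hold
print(conclusion)             # False: the conclusion does not follow
```

Two things can each differ from a third while still differing from one another, which is the whole of bFast's objection.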
ROb, you wrote this: "Speaking of computers: When a non-interactive program is executing, the physical state of the system evolves according to the laws of physics." and I wrote this: "As he spent month after month working through a Parisian moveable feast, Hemingway found that his Old Man and the Sea evolved very nicely." And yet again, we . . . hey wait a minute! We are sort of finally making some sense! Well, except for the use of the word evolve. Both the novel and the physical state of the computer changed, but they did not evolve except according to the novelist's will and the initial information that was encoded, and I think we know how that type of evolution matches with Neo-Darwinian evolution. . . er, not very well.Tim
March 5, 2009 at 4:18 PM PDT
Tim:
Of course, at this point I take back what I wrote, how about you?
No, but I will gladly do so if you tell me how it doesn't make sense. To speak of the reducibility of cats to dogs is a category error. I don't see how speaking of the reducibility of intelligence to law has the same problem. If it does, then you should inform your fellow ID proponents.R0b
March 5, 2009 at 4:14 PM PDT
ROb, you wrote this: "If “foresight” makes a process “directed”, then computers are apparently capable of directed contingency, as long as they have a random number generator that meets ID’s definition of contingency." And I wrote this: "If bald foreheads make a man sexy, then computers are apparently sexy as long as they have a randomly chosen surface that is shiny and hairless, thus somehow matching someone's definition of bald." Again, neither one of us is making any sense, and I retract my statement.Tim
March 5, 2009 at 4:10 PM PDT
R0b, I'm sure JT appreciates you coming to his defense, but he really is making bad points. As CJYMan sums up:
Your reasoning ability is horribly lacking at best. You are stating: A is not equal to B, C is not equal to B, Therefore, A is equal to C.
Intelligence is non-deterministic; randomness is non-deterministic; therefore, Intelligence equals randomness. That really is bad thinking, no soft way of saying it. As I pointed out (and others have as well), while Intelligence appears to be non-deterministic, it is also simultaneously directed. KF made this point, I made this point, everyone, it seems, has made this point, but you guys either miss it or don't understand the importance of it. Again, I offer my simple analogy of two forks in the road: 1) Law (determinism) says "Always take the left road" 2) Randomness says "I will take the left 50% of the time, and the right 50% of the time." ...however... 3) Intelligence says "I will take the path that leads me to the destination I'm headed to." While statistically the left-right choices of an intelligent agent may appear to almost mimic randomness (50-50 split), they don't have to and sometimes will not. They are contingent choices. Randomness is contingent, Intelligence is contingent, but Intelligence != Randomness. Furthermore, there is already a theoretical model dealing with contingent decision-making computation devices: Non-deterministic Automata. I already mentioned this as well. When dealing with NFAs, it is implicitly assumed that if an accepted final state can be reached by some possible path, the NFA will reach it. (In other words, we model that it makes non-deterministic choices, meaning different outcomes for the same state/input combination, and that there is a sense of teleology in that we're seeking out accepted final states.) It almost seems like you're getting frustrated at the conversation, but we're not. (At least I'm not.) I think it's funny how JT is trying to push such a bad point. Continue on. Atom
March 5, 2009 at 4:03 PM PDT
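Atom's appeal to non-deterministic finite automata can be made concrete. In the standard subset-tracking view, an NFA accepts a word if any path of choices reaches an accepting state, which is the "seeking out accepted final states" reading he describes. A minimal sketch; the example machine is illustrative, not from the thread:

```python
# Minimal NFA: track the SET of states reachable after each symbol.
# The word is accepted if ANY path ends in an accepting state.
def nfa_accepts(delta, start, accepting, word):
    current = {start}
    for symbol in word:
        current = {s2 for s in current for s2 in delta.get((s, symbol), set())}
    return bool(current & accepting)

# Example NFA over {'a', 'b'} accepting words that end in "ab".
delta = {
    ('q0', 'a'): {'q0', 'q1'},  # non-deterministic choice on 'a'
    ('q0', 'b'): {'q0'},
    ('q1', 'b'): {'q2'},
}

print(nfa_accepts(delta, 'q0', {'q2'}, 'aab'))  # True
print(nfa_accepts(delta, 'q0', {'q2'}, 'aba'))  # False
```

Note the convention Atom mentions: acceptance quantifies existentially over paths, so the machine is treated as if it always "guesses right" when a successful path exists.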
ROb, you wrote this: "If intelligence, according to ID, isn’t reducible to law, and if the term “law” indicates that a process is deterministic, which seems a reasonable interpretation, then intelligence must be non-deterministic. Other terms that are commonly used interchangeably with “non-deterministic” are “stochastic” and “random”." And I wrote this: If cats are "not reducible" to dogs and dogs are furry then cats must be non-furry. And neither one of us is making any sense. Of course, at this point I take back what I wrote, how about you?Tim
March 5, 2009 at 4:02 PM PDT
kairosfocus:
I think you need to pause and do some learning from those you would object to, or you will simply reduce your case into ever worse depths of reduction to absurdity.
Yes, JT. If you want to have the same kind of success that the ID movement enjoys, then you need to learn some things from UD denizens. kairosfocus, CJYMan, and Tim: Do you really not understand JT's points, or are you only pretending to misunderstand them so that you can insult him? I'll assume the former and make a feeble attempt at explaining them. When JT talks about ID's conception of intelligence, he is referring to the idea commonly expressed by ID proponents, along the lines of "ID says that “intelligence” is not reducible to law, matter and energy." If intelligence, according to ID, isn't reducible to law, and if the term "law" indicates that a process is deterministic, which seems a reasonable interpretation, then intelligence must be non-deterministic. Other terms that are commonly used interchangeably with "non-deterministic" are "stochastic" and "random". Furthermore, "intelligence", "design", and "agency" seem to be related terms in ID terminology, so presumably those terms entail non-determinism also. ID proponents sometimes use the term "contingency", although they don't agree on what they mean by it. Furthermore, they differentiate between "directed" and "undirected" contingency. If "foresight" makes a process "directed", then computers are apparently capable of directed contingency, as long as they have a random number generator that meets ID's definition of contingency. Speaking of computers: When a non-interactive program is executing, the physical state of the system evolves according to the laws of physics. This is true regardless of how the system got into the state that included a loaded program that was starting to run, which means that it's independent of the question of who or what designed and programmed it. This is what we mean when we say that computers operate according to law, or law and chance. If there are other points that you don't understand, you might try asking questions instead of hurling insults. 
If the ID camp wants to improve its status in the research and academic communities, then kairosfocus's advice is better directed to ID proponents. And if anyone thinks that JT's or my usage of ID terms is unreasonable, then they should work on coming up with definitions that don't raise more questions than they answer, and then using the terms consistently.R0b
March 5, 2009 at 3:45 PM PDT
Pardon: Accidentally cross-threaded JT: Here is how the UD glossary defines intelligence acceptably for ID purposes:
Intelligence – Wikipedia aptly and succinctly defines: “capacities to reason, to plan [which plainly implies foresight and is directly connected to the task of designing], to solve problems [again foresighted and goal directed], to think abstractly, to comprehend ideas, to use language, and to learn.”
In short, we are not using any unusual or idiosyncratic definition. Indeed, we used the Wiki definition for the excellent reason that it is an admission against interest by an entity known to be strongly opposed to ID, to the point of willful, insistent distortion and slander. That’s about the strongest form of evidence you can get: what intelligence is, is so well and so widely understood, that they could not come up with an “acceptable” definition that would cut off ID at the knees. GEM of TKI PS: JT, to save yourself further embarrassment, kindly take some time out and read the ID glossary and weak argument correctives.kairosfocus
March 5, 2009 at 12:34 PM PDT
JT, 85:
When I say that an AI program operates according to chance and necessity I mean It operates according to a program. The ‘chance’ aspect would enter in primarily if there are chance attributes in the program’s [e.g. a robot’s] environment. By saying it operates according to chance and necessity I do not mean that the program fell together by chance.
1 --> Programs work by algorithms, implemented through arbitrary symbolic codes that are dynamically inert but informationally functional; executed physically through specific irreducibly complex architectures, i.e. particular and specific organisations of processors and associated elements. (Pardon a bit of bio, but it is relevant to my point: I got to the stage where I could "read" 6800 and 6809 hex codes directly at one time . . . and hex code for one 6800 system will not work in another one set up with a different memory map, much less a 6502 system (though the hardware was compatible) and certainly not in the architecturally very similar PDP 11; of which the 6800 family was in effect an 8-bit port. Don't even try to go feed a 6800 EPROM over to an 8080 or a Z80! You will "let some smoke out" of the chips for sure!! [I never did "get" that address/data bus thingie . . . even though it was true that A and D fetches are temporally disconnected.]) 2 --> Natural law works by dynamical forces and patterns tracing to strong and weak nuclear forces, electro-magnetic forces and gravitation. A completely different pattern. 3 --> It seems fairly clear, therefore, that the only way you could say the excerpted is because -- sad to have to be direct -- you do not understand the nature and role of information in information systems; especially at decision nodes. 4 --> I have already pointed out that contingency is distinct from lawlike necessity giving rise to natural regularities, and that it may happen in two distinctive, empirically recognisable ways: (1) undirected, stochastic contingency (chance); (2) purposefully directed contingency (design). 5 --> A program is the latter, including an AI program, and Dr Dembski's explanatory filter is predicated upon that difference. (NB: In the original form, he did not sufficiently emphasise that he is looking at particular isolable aspects of the behaviour of systems or objects; which I and others now have. 
By integrating the analyses of the various aspects, one may see how the whole operates in ways that bring together chance, necessity and design, without confusing any of the three as -- pardon me -- you have.) _______________ JT, pardon some direct advice: I think you need to pause and do some learning from those you would object to, or you will simply reduce your case into ever worse depths of reduction to absurdity. GEM of TKIkairosfocus
March 5, 2009 at 12:20 PM PDT
JT: "I am saying that the I.D. conception of “intelligence” or “intelligent agency” equates to randomness because ID says it is something distinct from law." Your reasoning ability is horribly lacking at best. You are stating: A is not equal to B, C is not equal to B, Therefore, A is equal to C. If you can't figure out the error, I can see why some people here can't continue the discussion with you.CJYman
March 5, 2009 at 11:31 AM PDT