Uncommon Descent | Serving The Intelligent Design Community

Deep Blue Never Is (Blue, That Is)

In the comment thread to my last post there was a lot of discussion about computers and their relation to intelligence. This is my understanding of computers. They are just very powerful calculators, but they do not “think” in any meaningful sense. By this I mean that computer hardware is nothing but an electro-mechanical device for operating computer software. Computer software in turn is nothing but a series of “if then” propositions. These “if then” propositions may be massively complex, but software never rises above an utterly determined “if then” level. This is a basic Turing Machine analysis.

This does not necessarily mean that the output of computer software is predictable. For example, the “then” in response to a particular “if” might be “access a random number generator and insert the number obtained in place of the variable in formula Y.” But “unpredictable” is not a synonym for “contingent.” Even if an element of randomness is introduced into the system, the way in which the computer will employ that random element is determined.
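To illustrate with a minimal sketch (Python; the function and values are hypothetical illustrations, not from the post): even when a program consults a random number generator, the rule for how the random value is employed is fixed in advance. Seeding the generator makes the determinism visible.

```python
import random

def respond(condition):
    # The "if then" structure is fixed; only the data flowing through it varies.
    if condition == "needs_value":
        x = random.random()   # the unpredictable element
        return 3 * x + 1      # but how x is used is fully determined
    return 0

# Same seed, same "random" sequence: unpredictable is not contingent.
random.seed(42)
print(respond("needs_value"))
random.seed(42)
print(respond("needs_value"))  # prints the identical value
```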

Now the $64,000 question is this: Is the human brain merely an organic computer that in principle operates the same way as my PC? In other words, does the Turing Machine analysis also describe the human brain? If the brain is just an organic computer, then even though human behavior may at some level be unpredictable, it is nevertheless determined, and free will does not exist. If, on the other hand, it is not, if there is a “mind” that is separate from, though connected to, the brain, then free will does exist.

This issue has been debated endlessly, and I refer everyone to The Spiritual Brain for a much more in-depth analysis of this subject. For my purposes today, I propose to approach the subject via a very simple thought experiment.

First, a definition. “Qualia” are the subjective responses a person has to objective experience. Qualia are not the experiences themselves but the way we respond to the experiences. The color “red” is the classic example. When light of wavelength X comes into my eye, my brain tells me I am seeing the color red. The quale (singular of “qualia”) is my subjective experience of the “redness” of red. Maybe the “redness” of red for me is a kind of warmth. Other qualia might be the tanginess of a sour taste, the sadness of depression, etc.

Now the experiment: Consider a computer equipped with a light-gathering device and a spectrograph. When light of wavelength X enters the light-gathering device, the spectrograph gives a reading that the light is red. When this happens the computer is programmed to activate a printer that prints a piece of paper with the following statement on it: “I am seeing red.”
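For concreteness, here is a minimal sketch of the machine just described (Python; the threshold and names are illustrative stand-ins for the light-gathering device and spectrograph, not part of the original description). Visible red light falls roughly in the 620-750 nm band.

```python
def spectrograph_reading(wavelength_nm):
    # Classify incident light; roughly 620-750 nm registers as red.
    return "red" if 620 <= wavelength_nm <= 750 else "not red"

def on_light_detected(wavelength_nm):
    # The machine's entire "experience": one determined if-then rule.
    if spectrograph_reading(wavelength_nm) == "red":
        print("I am seeing red.")  # stands in for activating the printer

on_light_detected(680)  # sunset light -> "I am seeing red."
```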

I place the computer on my back porch just before sunset, and in a little while the printer is activated and prints a piece of paper that says “I am seeing red.”

 Now I go outside and watch the same sunset.  The reds in the sunset I associate with warmth, by which I mean my subjective reaction to the redness of the reds in the sunset is “warmth.”

1.  Did the computer “see” red?  Obviously yes.

2.  Did I “see” red?  Obviously yes.

3.  Did I have a subjective experience of the redness of red, i.e., did I experience a quale?  Obviously yes.

4.  Did the computer have a subjective experience of the redness of red, i.e., did it experience a quale?  Obviously no.

Conclusion:  The computer registered “red” when red light was present.  My brain registered “red” when red light was present.  Therefore, the computer and my brain are alike in this respect.  However, and here’s the important thing, the computer’s experience of the sunset can be reduced to the functions of its light gathering device and hardware/software.  But my experience of the sunset cannot be reduced to the functions of my eye and brain.  Therefore, I conclude I have a mind which cannot be reduced to the electro-chemical reactions that occur in my brain.

Comments
I take issue, in the first place, with the idea that free will is a matter of undetermined outcomes rather than ownership of one's decisions. God might say, "The stove is hot, therefore CloseEncounters will not touch it," and I respond, "That's right, I'm not going to touch it - I don't want to get burned." So did I lose free will because God had prior knowledge of my inherent aversion to overbearing temperature? Or did I maintain free will by *choosing* not to burn my hand? I think we need to re-examine what it means to have free will.

CloseEncounters
January 14, 2008 at 12:55 PM PDT
Q: "Would we conclude that the presence of the 9 prime numbers is the result of intelligent agency?" I don't think so. I have not done the computations, but I don't think that the sequence is complex (improbable) enough in the Dembski sense: in other words, it should be well far from the UPB. But if the sequence were, for instance, of the first 10^6 prime numbers, I think that would be very different. In that case, a design inference becomes naturally the best explanation, unless and until a mechanism based on necessity is realistically hypothesized or, better still, proved.gpuccio
January 14, 2008 at 12:44 PM PDT
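Since gpuccio notes he has not done the computations, a rough check is easy to sketch (Python; the uniform-chance model below is an illustrative assumption of this sketch, not gpuccio's): under a naive model where each period is an independent uniform draw, nine small prime periods fall nowhere near Dembski's 10^-150 universal probability bound, while a million would.

```python
import math

def naive_chance_log10(n, max_value):
    """log10 of the probability that n independent uniform draws from
    1..max_value reproduce one prescribed sequence (naive chance model)."""
    return -n * math.log10(max_value)

print(naive_chance_log10(9, 20))      # about -11.7: ~10^-12, far above 10^-150
print(naive_chance_log10(10**6, 20))  # about -1,301,030: vastly below the bound
```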
kairosfocus: Thank you for your patience and persistence in trying to stick to fundamental truths, in a discussion which is often tiresome and irritating. I confess that sometimes I am discouraged by the useless complexity that human intelligence is able to create about rather simple issues (that too, I think, is a prerogative of conscious free intelligence). With that, I am not affirming that the "solution" to the fundamental problems of consciousness is easy (indeed, it's just the opposite), but at least it should be possible to define in a simple way the problems and the different ideas in the field. Instead, a lot of ambiguity and self-contradiction always emerges, and that does not help clarify things for those who want to make an intellectual choice in this difficult field. I'll try again to sum up a few important points, in my own view:

1) Consciousness is an empirical fact. It is experienced by each one of us, and that experience is shared through "indirect" ways (communication, language, etc.). Indeed, if we exclude solipsistic positions, none of us doubts that others are conscious in much the same way as we are. But our knowledge of the phenomenon of consciousness is totally empirical and personal. We are intuitively aware of our personal consciousness: that is our primary knowledge, and to that empirical experience we give a name: consciousness. We can call it by any other name, but the fact we are naming remains the same. Being a result of experience it is, indeed, a fact, not a theory, or a concept, or anything else. A fact. We "are" conscious. Indeed, all other facts, and theories, and many other things, are experienced only "in" consciousness, and as modifications of consciousness itself. So, in a way, consciousness is "the mother of all facts", the supreme fact, the only direct reality we experience. Everything else is, to some degree, "indirect".

2) We have different degrees of certainty of the existence of consciousness:

a) absolute certainty for our personal consciousness (we experience it!), although Dawkins and company could object, at least for themselves...

b) almost absolute certainty for the consciousness of other human beings (after all, it is an inference, although probably the strongest inference ever made; and yet, solipsists have challenged it).

c) various degrees of certainty, depending on personal ideas, for the existence of consciousness in other biological beings (very likely at least for higher animals, but here the inference becomes more subjective).

d) a generally accepted inference of the absence of consciousness in non-biological objects (an inference again, maybe an "argument from incredulity").

e) a recent inference, accepted by many, and rejected by many others, that some special kind of non-biological objects, especially computers, "can" become conscious if their computations are complex enough in a certain, ill-defined way (parallel computing? neural networks? loops?).

3) I think the problem we are discussing is indeed the "e)" inference, and not, I hope, the "a)" experience and the "b)" inference. Those who challenge those first two points (and there are many of them) are definitely too weird for me, and I will amicably leave them to their worldview, wishing them all the best. For the others, let's consider the "e)" inference. Provided that it is indeed an inference, and not a fact, I would like to suggest that it is a very strange and unsupported inference.

Probably, the only reason at all for such a bizarre inference is a well-rooted faith in two premises:

e1) Human beings are conscious (that is, indeed, the b) inference, and I think we can all agree).

e2) Human beings are "only" their visible body, which is made of matter just the same as anything else.

From these two premises, it comes easily:

e3) Something in the human body, most likely the structure of the brain, is the cause of consciousness.

e4) If a brain can do it, why not a computer?

That is, in a few words, the basis of the theory we all know as strong AI.

My observations:

1) As everyone can see, strong AI is not really an inference. It is rather a logical deduction from two premises. But, while the first one (e1) is a very well supported inference, the second (e2) is only an unwarranted statement, unsupported by any evidence or logical argument. Or at least, some think it is supported by both, and some (including me) don't agree. But let's assume that those who believe in the "e2" statement have their reasons, more or less acceptable, to do so. Still, the plausibility of strong AI depends exclusively on those reasons, because strong AI is not a scientific inference, but only the logical consequence of the "e2" statement, in other words of a purely materialistic interpretation of human beings, and not the contrary. So, all those who affirm that strong AI, in any of its variants, has demonstrated the purely material nature of humans are wrong. The opposite is true. If you can demonstrate the purely material nature of humans, strong AI follows inevitably.

But there are very strong arguments "against" strong AI, and therefore, indirectly, against its logical premise, that is, the purely material nature of humans. First of all, strong AI is the typical theory which is full of self-contradictions, smartly hidden by complex words and unsubstantial concepts. The biggest contradiction is the following: AI theories, and information theory in general, maintain (correctly) that, in computations, the results are independent of the hardware. If that is true, and the emergence of consciousness depends purely on the structure of the software, then even an abacus, if complex enough and with the right structure, should become conscious. After all, any computation can be performed on a very big abacus-like machine, given enough time and resources. Would that enormous, and very very long, abacus computation become conscious?

Other unwarranted fantasies: if a simple computation has no subjective counterpart, why should a sum of simple computations become subjectively aware? I am already hearing the voices: parallel computing (what does it matter? A computation is the same, whether you use a serial or a parallel algorithm to make it); loops (what does it matter? Loops are always used in simple computations, and they are not aware; why should complex loops be aware?); neural networks (again, if a simple neural network is not conscious, why should a bigger one be?); and, finally, emergent properties: ah, that's really smart; emergent properties and self-organizing processes are really the triumph of materialist metaphysics! You can make anything "emerge", whatever that means, if you use the right silly words in the right silly context. But, unfortunately, consciousness is not a "property" at all. Consciousness is a fact, the mother of all facts.

Properties, on the contrary, are categories of reason and mind; in other words, they are very indirect, complex human mental entities experienced, ultimately, in consciousness. Or, at least for some emergent properties, in the credulous consciousness of some.

gpuccio
January 14, 2008 at 12:33 PM PDT
KF, in 129, pointed out, "2] No 127 was by Kairos, who it seems is a European. I am a Jamaican resident in Montserrat." Double drat! Two gaffes (at least) at about the same time. I'll keep that in mind - kairos is not kairosfocus. (I had been confused when you referred to Kairos in some of your posts!) I apologize to both of you.

KF indicates, "Unfortunately, this led Q to imagine that there was a basic flaw in the point I was making in that section." Yup. Two different problem spaces accidentally being explained as one. Now I understand what happened.

Back to my query about primes. They are an interesting side discussion for this thread, regarding the inference of intelligence. I agree, as has been mentioned, that detection of primes is a good clue that intelligence is involved. It leads to my thought experiment to explore the application of the explanatory filter to observations regarding intelligence: Let's assume an astronomer sees a stellar body, and carefully observes that its color slightly oscillates. He then observes that the oscillation is composed of 9 superimposed frequencies - each a multiple of a base frequency. The periods of those 9 frequencies are 1, 2, 3, 5, 7, 11, 13, 17, 19 times the period of the base. Would we conclude that the presence of the 9 prime numbers is the result of intelligent agency?

Q
January 14, 2008 at 12:16 PM PDT
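The "specification" half of Q's thought experiment is mechanical enough to sketch (Python; the treatment of 1 alongside the first eight primes follows Q's list, while the function names are illustrative assumptions): given the measured period ratios, test whether they match the prescribed prime pattern.

```python
def is_prime(k):
    return k > 1 and all(k % d for d in range(2, int(k**0.5) + 1))

def matches_prime_pattern(periods):
    """True if the observed period ratios are exactly 1 followed by
    the first len(periods)-1 primes, as in Q's astronomer example."""
    primes, k = [], 2
    while len(primes) < len(periods) - 1:
        if is_prime(k):
            primes.append(k)
        k += 1
    return sorted(periods) == [1] + primes

print(matches_prime_pattern([1, 2, 3, 5, 7, 11, 13, 17, 19]))  # True
```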
Q: In re 126 - 127:

1] Apology accepted. I just didn't know there was another interpretation out there!

2] No, 127 was by Kairos, who it seems is a European. I am a Jamaican resident in Montserrat.

3] In any case, products of prime numbers of sufficient length are used in making hard-to-break codes, precisely because there is no known algorithm that efficiently recovers the primes from their product. Wiki, that ever so humble source, notes:
Proving a number is prime is not done (for large numbers) by trial division. Many mathematicians have worked on primality tests for large numbers, often restricted to specific number forms. This includes Pépin's test for Fermat numbers (1877), Proth's theorem (around 1878), the Lucas–Lehmer test for Mersenne numbers (originated 1856),[1] and the generalized Lucas–Lehmer test. More recent algorithms like APRT-CL, ECPP and AKS work on arbitrary numbers but remain much slower. For a long time, prime numbers were thought of as having no possible application outside of pure mathematics; this changed in the 1970s when the concepts of public-key cryptography were invented, in which prime numbers formed the basis of the first algorithms such as the RSA cryptosystem algorithm. Since 1951 all the largest known primes have been found by computers. The search for ever larger primes has generated interest outside mathematical circles. The Great Internet Mersenne Prime Search and other distributed computing projects to find large primes have become popular in the last ten to fifteen years, while mathematicians continue to struggle with the theory of primes.
GEM of TKI

kairosfocus
January 14, 2008 at 10:32 AM PDT
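Two different tasks are in play in the comment and the Wikipedia excerpt above: proving individual numbers prime (fast specialized tests exist) and recovering the prime factors of a large product (no efficient method is known, which is what the codes rely on). The sketch below shows the naive baseline for both, trial division, and why it is hopeless at cryptographic sizes; it is illustrative Python, not any of the named algorithms.

```python
def is_prime(n):
    """Trial division primality check: correct, but cost grows like sqrt(n)."""
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def factor(n):
    """Naive factoring by trial division -- hopeless for RSA-sized numbers."""
    d, out = 2, []
    while d * d <= n:
        while n % d == 0:
            out.append(d)
            n //= d
        d += 1
    if n > 1:
        out.append(n)
    return out

print(is_prime(2**13 - 1))   # True: 8191 is a Mersenne prime
print(factor(8051))          # [83, 97]
```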
Mr Scott: Thanks for your remark at 124. We were plainly talking at cross-purposes, as the above reveals in light of my App 1 section 6 point xi on my context. Unfortunately, this led Q to imagine that there was a basic flaw in the point I was making in that section. Hopefully, the atmosphere can now be cleared. And indeed, designing proteins so that they will inter alia cluster and self-assemble into the key mutual working configurations required for life functions implies a further degree of specified complexity in the DNA that codes for them. That just makes it all the harder to get to the functionality islands in the sea of possible amino acid polymer configurations. GEM of TKI

kairosfocus
January 14, 2008 at 10:25 AM PDT
kf, in 121, comments, "But emissions tracing a sequence of prime numbers of reasonable length couldn't be due to natural forces." Maybe I've missed something, but has it been proven that there is no function which can result in a long sequence of prime numbers? (By "proven", I mean shown to be consistent with the accepted hypotheses and theorems. That is the context I meant for "prove", to clear up a question back in 118.)

Q
January 14, 2008 at 09:48 AM PDT
KF points out, in 118, "First of all, when I use the abbreviation “IMHCO” I mean “in my humble but CONSIDERED opinion,” which is of course open to correction." Then I apologize for arriving at the wrong, but researched, meaning for your use of IMHCO.

Q
January 14, 2008 at 08:57 AM PDT
aiguy: "If, someday, SETI finds some phenomenon and wishes to argue that ET life forms are the best explanation, we can argue about the merits of their argument."

If, someday, evolutionary biologists can demonstrate mutation and selection creating some complex organic structure such as a bacterial flagellum, we can argue about the merits of the argument that all complex organic structures originated by the same mechanism. What's good for the goose is good for the gander. Surely you're not proposing that a double standard be enforced, right?

DaveScot
January 14, 2008 at 06:54 AM PDT
kf, I think you're going beyond my point. To reiterate, my point was that properly designed proteins self-assemble into larger complex structures. If the component parts of jet aircraft were of the same nature as proteins, then they too would self-assemble. That said, creating parts that self-assemble takes MORE design, not less; more teleology, not less.

DaveScot
January 14, 2008 at 06:27 AM PDT
Kairos: Maybe you can help us all sort this out? (Complete with an inspirational thought or two . . . just remember to translate the Greek this time!) GEM of TKI

kairosfocus
January 14, 2008 at 02:46 AM PDT
Onlookers: I am still shaking my poor head in astonishment at how reliably a step-by-step, not-so-hard-to-follow argument -- if you will but read -- is being misread or ignored, and the resulting army of strawmen is then knocked down, doused with oil and ignited, clouding the atmosphere with blinding, noxious smoke. All, as I warned of at the head of my always-linked, where I invited us to a better path:
INTRODUCTION: The raging controversy over inference to design, sadly, too often puts out more heat and blinding, noxious smoke than light. (Worse, some of the attacks to the man and to strawman misrepresentations of the actual technical case for design [and even of the basic definition of design theory] that have now become a routine distracting rhetorical resort and public relations spin tactic of too many of the defenders of the evolutionary materialist paradigm, show that this resort to poisoning the atmosphere of the discussion is in some quarters quite deliberately intended to rhetorically blunt the otherwise plainly telling force of the mounting pile of evidence and issues that make the inference to design a very live contender indeed.) Be that as it may, thanks to the transforming impacts of the ongoing Information Technology revolution, information has now increasingly joined matter, energy, space and time as a recognised fundamental constituent of the cosmos as we experience it. For, it has become increasingly clear over the past sixty years or so, that information is deeply embedded in key dimensions of existence. This holds from the evidently fine-tuned complex organisation of the physics of the cosmos as we observe it, to the intricate nanotechnology of the molecular machinery of life [cf. also J Shapiro here! (NB on AI here, and on Strong vs Weak AI here and here . . . ! )], through the informational requisites of body-plan level biodiversity, on to the origin of mannishness as we experience it, including mind and reasoning, as well as conscience and morals. So, we plainly must frankly and fairly address the question of design as a proposed best current explanation -- and as a paradigm framework for transforming the praxis of science and thought in general, not just technology -- as, it has profound implications for how we see ourselves in our world, indeed (as the intensity of the rhetorical reaction noted just now indicates) it powerfully challenges the dominant evolutionary materialism that still prevails among the West's secularised educated elites. Therefore, it is appropriate for us to now pause and survey the key facts, concepts and issues, drawing out implications as we seek to infer the best explanation for the information-rich world in which we live.
So, can we clear the air and start over, on the merits of the actual issues? Sigh . . . GEM of TKI

kairosfocus
January 14, 2008 at 02:44 AM PDT
#106 aiguy: "Archeology does not have any notion at all of some abstract class of entities that philosophers call “intelligent agents”. Archeology is the study of ancient human (and only human) civilizations."

Your argument isn't correct. Archeology studies human civilizations (mainly because they are the only ones actually found; if aliens had left artifacts, archeology would study them too), BUT the techniques for recognizing whether something is actually a human artifact or the result of natural forces are independent of this restriction. They are basically techniques for distinguishing what is the result of an intelligent agent from what isn't.

On ET searching: "There has never been any published scientific inference to an extra-terrestrial life form to explain anything. If, someday, SETI finds some phenomenon and wishes to argue that ET life forms are the best explanation, we can argue about the merits of their argument. If for example they detected a wide-band EM signal emanating from within a pulsar that had prime-number intervals (like the Contact example) I would think most scientists would not accept that life-forms were responsible."

Come on, aiguy, don't contradict your own expertise and the word "intelligence" that is present in your nickname. You know perfectly well that in that case scientists would be pretty sure that intelligent agents were involved. They know that pulsars are characterized by regular emissions, and this was just the reason scientists could argue for their natural and rotatory nature. But emissions tracing a sequence of prime numbers of reasonable length couldn't be due to natural forces. And after all, if this were not the case, the whole SETI project would be scientifically useless.

kairos
January 14, 2008 at 02:31 AM PDT
PPS: And, that is what Isaiah saw ever so long ago, now. [Notice how I am specifically citing him as a witness to the longstanding human insight on what evident purposefulness points to; just as I earlier cited Cicero on what complex, functional digital information normally calls forth from us: inference to message, not lucky noise. To infer to message in convenient cases and -- without seriously addressing the CSI and explanatory filter issues -- to lucky noise in "inconvenient" ones resting on the same empirically anchored basic probabilistic resources in config spaces challenge, is IMHBCO selective hyperskepticism.] GEM of TKI

kairosfocus
January 14, 2008 at 01:26 AM PDT
PS: In case you miss my historically relevant literary allusion, consider the following from the prophet Isaiah:
ISA 42:5 This is what God the LORD says-- he who created the heavens and stretched them out, who spread out the earth and all that comes out of it, who gives breath to its people, and life to those who walk on it: . . . . ISA 44:24 "This is what the LORD says-- your Redeemer, who formed you in the womb: I am the LORD, who has made all things, who alone stretched out the heavens, who spread out the earth by myself, . . . . ISA 45:18 For this is what the LORD says-- he who created the heavens, he is God; he who fashioned and made the earth, he founded it; he did not create it to be empty, but formed it to be inhabited-- he says: "I am the LORD, and there is no other.
For, manifestations of purpose are signs of intent-ful mind at work.

kairosfocus
January 14, 2008 at 01:16 AM PDT
Sigh: First of all, when I use the abbreviation "IMHCO" I mean "in my humble but CONSIDERED opinion," which is of course open to correction. [I was blissfully unaware that there was another interpretation out there . . . especially since I have "always" held that one can make errors and so must be provisional in one's thinking. Indeed, "humble" implies that. I will now, for clarity in this discussion [long since sadly clouded by the smoke of burning strawmen], change to IMHBCO to mark the distinction.] This putting of words into my mouth that don't properly belong there is sadly symptomatic of the problem that Q manifests -- setting up and knocking over a strawman that has been led to by a red herring. And, that sort of knocking over of strawmen is hardly "calling a spade a spade"! So, next, lest we forget the actual focus-issue: that brings us back to the original post for this thread, as BarryA set it:
In the comment thread to my last post there was a lot of discussion about computers and their relation to intelligence. This is my understanding of computers. They are just very powerful calculators [IMHBCO as one who has designed, built, programmed and debugged such from the ground up, chip by chip and machine code by machine code: yes!], but they do not “think” in any meaningful sense. By this I mean that computer hardware is nothing but an electro-mechanical device for operating computer software. Computer software in turn is nothing but a series of “if then” propositions. These “if then” propositions may be massively complex, but software never rises above an utterly determined “if then” level. This is a basic Turing Machine analysis. This does not necessarily mean that the output of computer software is predictable. For example, the “then” in response to a particular “if” might be “access a random number generator and insert the number obtained in place of the variable in formula Y.” “Unpredictable” [by us!] is not a synonym for “contingent.” Even if an element of randomness is introduced into the system, however, the way in which the computer will employ that random element is determined [in short, the reason for the reasoning lieth elsewhere than in the machine that carries out programmed instructions on input data, step by step]. . . . The computer registered “red” when red light was present. My brain registered “red” when red light was present. Therefore, the computer and my brain are alike in this respect. However, and here’s the important thing, the computer’s experience of the sunset can be reduced to the functions of its light gathering device and hardware/software. But my experience of the sunset cannot be reduced to the functions of my eye and brain. Therefore, I conclude I have a mind which cannot be reduced to the electro-chemical reactions that occur in my brain.
BarryA is right, dead right! Now, on the attempt to blunt the force of my scaled-down-to-semi-molecular-scale version of Sir Fred Hoyle's "tornado in a junkyard expected to form a 747 by chance + necessity alone" example, I respond to the latest -- IMHBCO, sadly selectively hyperskeptical -- objections as follows:

1] As a proof of new knowledge, they [thought experiments] fail.

First, back to Galileo -- I believe, during his time of imprisonment, but I stand to be corrected on this:
a --> Consider his U-troughs and metal balls rolling down then "trying" to get back up to their original level as he made the tracks smoother and smoother.

b --> He then argued that in a perfectly smooth track, the balls would rise back to their original level. (Have you, Q, ever seen a perfectly smooth and actually friction-free trough? [Or even a friction-free air track or air table?])

c --> He then made the next in-thought extension: flatten out the rising arm, so that the ball is on a smooth, in effect infinitely long, track and never gets a chance to rise back to its original level. Thus, Galileo arrives at, and in so doing warrants, in effect Newton's First Law of Motion [i.e., in our terms, of MOMENTUM], the law of inertia - BY EMPIRICALLY ANCHORED THOUGHT EXPERIMENT. (Actually, if memory serves, he mistakenly thought that the ball would go in a circle -- going a bit far with the fact that the Earth has been known since 300 BC to be a sphere.)

d --> This brings us to a slippery phrase that, as one knowing about scientific inference to best, empirically anchored explanation [IBE], you MUST know is utterly inappropriate to such a context for science: proof of new knowledge. Scientific knowledge of consequence is provisional, and empirically testable and reliable, not "proved." AND THE SLIPPING IN OF SUCH A LOADED CONCEPT TO PREJUDICE THE CASE IN A SITUATION WHERE YOU DON'T WANT TO GO WITH THE IMPLICATIONS OF IBE, IS SELECTIVE HYPERSKEPTICISM.
In turn that brings us to the empirical root of the microjets thought experiment: the diffusion of an ink drop in a glass of water. 2] Of ink drops and microjets and macromolecules in prebiotic soups . . .
[Q, 117:] in your thought experiment of drops of ink in water, and your extrapolation of that process to molecular bonding (”essentially scaling down to quasi-molecular scale”), your premise ignores the biasing effect of electrical charge on the molecular bonding. Quite simply, that extrapolated thought experiment of yours can be shown to be insufficiently developed (i.e. weak premises) to be able to provide sufficiently valuable results.
Not so fast, pardner!

e --> As an inspection of the exchange with Dave Scott will rapidly reveal, he mistakenly [obviously he did not read my point xi] thought in terms of a context that I EXPLICITLY was not addressing: proteins clicking together to carry out the biochemistry of life. [In short, the relevant components in my thinking are the constituents of the alleged pre-biotic soup, the monomers of life: amino acids, nucleic acids and the like.]

f --> As with Thaxton et al, whom as point xi will show I was explicitly discussing [and it is wise to check a context before making an accusation as strong as you have made, Q], I was speaking to the FORMATION of informational macromolecules: why else do you think I was taking clumping and configuring work, step by step, to show the validity of breaking up dS into dS_clump + dS_config, as TBO did?

g --> Indeed, observe how I used "clumping" as a substitute for "chemical work" explicitly, e.g. in point xi as excerpted above at 110:
xi] Extending to the case of origin of life, we have cells that use sophisticated machinery to assemble the working macromolecules, direct them to where they should go, and put them to work in a self-replicating, self-maintaining automaton. Clumping work [if you prefer that to TBO’s term chemical work, fine], and configuring work can be identified and applied to the shift in entropy through the same s = k ln W equation.
h --> I then very explicitly applied the parallels, and you did not even have to do a web click to get to them; you just needed to READ before assuming and asserting rhetorically convenient error on my part:
For, first we move from scattered at random in the proposed prebiotic soup, to chained in a macromolecule, then onwards to having particular monomers in specified locations along the chain — constraining accessible volume again and again, and that in order to access observably bio-functional macrostates. Also, s = k ln W, through Brillouin, TBO link to information, viewed as “negentropy,” citing as well Yockey-Wicken’s work and noting on their similar definition of information; i.e. this is a natural outcome of the OOL work in the early 1980’s, not a “suspect innovation” of the design thinkers in particular. BTW, the concept complex, specified information is also similarly a product of the work in the OOL field at that time; it is not at all a “suspect innovation” devised by Mr Dembski et al, though of course he has provided a mathematical model for it.
i --> That brings us right back to the force of my summary point in point 3 of 116 [and fixing a typo or two . . . sorry on the old Dyslexia]:
[To get another look at the same physics: Put a drop of ink in a glass of water and see it “dissolve.” How long on average would we have to wait for it to spontaneously come back together, Q, and why? And, if the drop were instead parts for our famous little jet, which have to not only be clumped but configured to fly, how long for that to happen by chance, Q?] For, I am essentially scaling down to quasi-molecular scale the point that Hoyle — no mean thermodynamicist! — made in remarking about tornadoes in junkyards and assembling 747’s by lucky correlations of materials and forces that just happened to do the relevant clumping and configuring work: voila — a flyable jumbo-jet. So, why is it that over in Seattle, Boeing doesn’t save money and send in the twisters into the jumbo-jet parts warehouses? [Onlookers, the answer is so obvious that it can only be dodged by cute evasions.]
Sadly, as this point sums up, that is just what has happened in 116.

3] you believe/present your thought experiments/extrapolations as stronger than is supported by the scientific process. DaveScot’s example of the flaws in your thought experiment about clustering, earlier in this thread, is one representation of why your confidence/asserted correctness should be tempered.

Again, in the CORRECT -- and easily accessible -- original context of my remarks . . .
what is the flaw in seeing that [i]undoing the tendency of diffusion requires clumping work, and that [ii] reliably configuring biofunctional molecules requires highly informationally directed configuring work, as [iii] the config space for long enough macromolecule chains is vastly beyond the probabilistic resources of the observed cosmos?
Indeed, is this not just what good old prof Wiki testifies to in the telling excerpt in 116 above? [Which, I must note, you have also neatly failed to discuss. Onlookers, compare the sequence of italicised words there.] In short, you have tilted at a strawman, and have failed to address not only my EXPLICIT contexts, but also evidently overlooked the corroborating citation of a hostile witness.

4] your premise ignores the biasing effect of electrical charge on the molecular bonding. Quite simply, that extrapolated thought experiment of yours can be shown to be insufficiently developed (i.e. weak premises) to be able to provide sufficiently valuable results.

Dismissal based on a convenient strawman. Please address the actual -- and easily accessible -- argument on the merits.

5] “All thought experiments, however, employ a methodology that is a priori, rather than empirical, in that they do not proceed by observation or physical experiment.”

As the very examples of Galileo show, thought experiments are often empirically anchored, and can be quite compelling. So, I close for now by asking three questions to show that I am not indulging in ad hominems but am calling attention to selective hyperskepticism, backed up now by insistence on irrelevant distraction after irrelevant distraction in the teeth of easily accessible evidence and facts:
I --> What is the material difference between diffusion of ink particles and that of essentially similarly sized microjet parts [which per argument can be of similar weight too; say, made of smart plastics]?

II --> Having clumped the particles [and undone diffusion], what is the essential difference between the need to configure monomers in biofunctional macromolecules in precise order, and to configure microjet parts in precise order to get a flyable jet? [And note the underlying starting context: Sir Fred Hoyle on a tornado in a junkyard forming a flyable 747, scaled down to molecular levels so molecular forces such as those that would have been at work in prebiotic soups can go to work.]

III --> What is the essential difference between slightly futuristic nanobots and the biological smart molecules that read off the DNA code, then step by step assemble a protein by following algorithmic instructions?
And, finally finally [but one!], on the main issue: I argue that the smarts in computers comes from intelligent agents, not from the machines themselves. Further to this, in all cases of observed origin of FSCI, that is also the case. So per induction, we have excellent empirical grounds to infer that in all cases of FSCI, we are well warranted to infer to such agents, unless and until someone can show empirically that lucky noise and/or demonstrably reliable natural laws are generating such FSCI. Worse, if someone does show that there is a law of nature that forces the cosmos as a whole to form sub-cosmi that are life-habitable, thence onward the formation of life and its diversification at body plan level, that would be suggestive indeed as to the origin and purpose of the physical world! GEM of TKI

kairosfocus
January 14, 2008 at 01:00 AM PDT
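The dS_clump/dS_config bookkeeping that kairosfocus attributes to Thaxton, Bradley and Olsen can be written compactly from Boltzmann's relation; the decomposition below is a restatement of the split as sketched in the comment, with W counting the microstates of the scattered, clumped and configured macrostates (notation mine, not TBO's).

```latex
S = k \ln W
\;\Longrightarrow\;
\Delta S_{\text{total}}
  = \underbrace{k \ln \frac{W_{\text{clumped}}}{W_{\text{scattered}}}}_{\Delta S_{\text{clump}}}
  \;+\;
  \underbrace{k \ln \frac{W_{\text{configured}}}{W_{\text{clumped}}}}_{\Delta S_{\text{config}}}
```

The two logarithms telescope, so the total entropy change from scattered to configured is recovered exactly; the split simply separates the cost of gathering the parts from the cost of arranging them.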
KF, in 116, points out that I should know that "thought experiments [a term popularised by Einstein, but the ideas are as old as modern physics] are in fact of major and respectable importance in the history of what I suspect is our in-part common discipline, Physics".

Regarding the history of science, we have no dispute about thought experiments as having "major and respectable importance". But that doesn't give you a free pass to abuse the limitations of thought experiments. As a proof of new knowledge, they fail. As a platform for making a prediction - yes - but the limitation is that follow-through is required for it to be science. As a means to extrapolate old knowledge to new scenarios - yes, but with the same limitation as making a prediction. As a means to explain basic concepts, thought experiments have some value, more if they represent an interpolation rather than an extrapolation.

The problem, as you so often illustrate by using IMHCO (In My Humble but Correct Opinion) ( http://awads.net/wp/2005/06/22/i-dont-understand-that-internet-jive/ ), is that you believe/present your thought experiments/extrapolations as stronger than is supported by the scientific process. DaveScot's example of the flaws in your thought experiment about clustering, earlier in this thread, is one representation of why your confidence/asserted correctness should be tempered. For example, in your thought experiment of drops of ink in water, and your extrapolation of that process to molecular bonding ("essentially scaling down to quasi-molecular scale"), your premise ignores the biasing effect of electrical charge on the molecular bonding. Quite simply, that extrapolated thought experiment of yours can be shown to be insufficiently developed (i.e. weak premises) to be able to provide sufficiently valuable results.

Thought experiments aren't empirical. Since you quote wiki, check here, first paragraph: http://en.wikipedia.org/wiki/Thought_experiment . "All thought experiments, however, employ a methodology that is a priori, rather than empirical, in that they do not proceed by observation or physical experiment."

KF also says Q is simply being "cleverly" selectively hyperskeptically difficult and evasive. Why are you going ad hominem? It won't improve your argument. Call it hyperskeptical if you will. I call it calling a spade a spade.

Q
January 13, 2008 at 05:35 PM PDT
H'mm: It seems a few further remarks are in order, especially as Q is IMHCO being just a little cute and evasive -- he should know that thought experiments [a term popularised by Einstein, but the ideas are as old as modern physics] are in fact of major and respectable importance in the history of what I suspect is our in-part common discipline, Physics. For instance, Galileo's cannon and musket ball dropping off the Leaning Tower of Pisa was probably a thought experiment; indeed, the musket ball would lag slightly. Similarly, the famous pulse-timed pendulum in chapel would have shown a bit of variation in period with width of swing. Also, in getting to the principle of inertia, he in-thought extended the behaviour of balls in smooth U-shaped troughs -- they "try" to get back up to their original level -- by asking what would then "logically" happen with a perfectly smooth trough which was simply flattened out instead of rising back up.

Also, observe again that there is the little issue [cf. 94 etc, just from me; others have made the same still-dodged point . . .] that was long since pointed out but is being ducked in a rush after convenient red herrings and strawmen on his part:
. . . I hold that we directly, and as the first undeniable fact of all, experience ourselves as intelligent agents, and that we observe one another as similarly intelligent agents. . . . . So, let us put all of this in proportion and keep out of that ever so tempting morass, selective hyper-skepticism.
But first . . .

1] Dave, 114: Have you read "Edge of Evolution"? Behe describes protein binding and used the magnet analogy.

Okay, nope -- haven't got around to Behe's EOE yet; I am out in the boonies! [I do note that, on the summary of his main point, he has shown that the RV + NS mechanisms accessible to malaria bugs have, over more generations than there were for mammalia per the usual timelines, only got to a few monomers' worth of relevant shift in key proteins in their battle with the antimalaria drugs. Of course malaria is held to be the biggest selection pressure on the human genome over the past several thousand years, and we have made only a few minor -- but survival-significant -- variations in relevant proteins too. In short, the empirical data backs up the config-space challenge on the vast and unfeasible improbability of getting to highly complex and biofunctional molecules at body plan innovation level by chance innovations and selection filtering on the gamut of life on earth. That is also the message of the Cambrian life revolution. Etc etc.]

But also it is now very clear that we are discussing two very different things -- I am principally talking about PROTEIN ASSEMBLY AND DNA ASSEMBLY (as per all the way back to TBO and Denton). Protein folding [post assembly], protein-protein interactions and DNA coiling are largely electrically driven indeed, but the primary configuring issue is not there; it is with getting TO the creation of the cluster of informational macromolecules. If you recall the cell's step-by-step, sequential, algorithmic protein assembly procedures, you will see why. Here's good old materialism-leaning prof Wiki as just linked:
Proteins are assembled from amino acids using information encoded in genes. Each protein has its own unique amino acid sequence that is specified by the nucleotide sequence of the gene encoding this protein . . . . Genes encoded in DNA are first transcribed into pre-messenger RNA (mRNA) by proteins such as RNA polymerase [GEM: i.e. enzymes, complex proteins themselves, i.e. the process loops!]. Most organisms then process the pre-mRNA (also known as a primary transcript) using various forms of post-transcriptional modification to form the mature mRNA, which is then used as a template for protein synthesis by the ribosome. In prokaryotes the mRNA may either be used as soon as it is produced, or be bound by a ribosome after having moved away from the nucleoid. In contrast, eukaryotes make mRNA in the cell nucleus and then translocate it across the nuclear membrane into the cytoplasm, where protein synthesis then takes place. The rate of protein synthesis is higher in prokaryotes than eukaryotes and can reach up to 20 amino acids per second.[6] The process of synthesizing a protein from an mRNA template is known as translation. The mRNA is loaded onto the ribosome and is read three nucleotides at a time by matching each codon to its base pairing anticodon located on a transfer RNA molecule, which carries the amino acid corresponding to the codon it recognizes. The enzyme aminoacyl tRNA synthetase "charges" the tRNA molecules with the correct amino acids. The growing polypeptide is often termed the nascent chain. Proteins are always biosynthesized from N-terminus to C-terminus.
Notice the information and communication technology terms that I have emphasised -- codes, step-by-step algorithms, precise specified sequences etc. This is a digital, code-based, algorithmic and communication process -- it only happens to be happening in chemical technology -- and the digital information and communication system nature of what is going on is obvious. [And, that is my bailiwick -- thank you, kind and diligent molecular biologists and biochemists etc, for handing to me the basic information by reverse-engineering the algorithm-implementing machinery.] Only, the specific technologies use proteins as a class of swiss-army-knife informational molecules and use DNA for memory elements. Instead of registers and ALUs etc, look for ribosomes and enzymes etc. But, functionally, the processes are the same basic digital techniques that are by now so familiar.

All that is left off by prof Wiki is that the typical such protein is ~ 300 amino acids long, requiring about 900 DNA base pairs. 4^900 is of course ~ 7.145 * 10^541, well beyond my "stretched" Dembski bound [to take in islands of functionality]. And that severely underestimates the FSCI at work. Now, try to get the required mechanisms assembled from monomers in a pre-biotic soup by chance! [Onward, try to account for, say, the 100 mn bases to get the body-plan codes for new phyla in, say, the Cambrian fossil-life revolution.] When the proteins are duly assembled, moved around by heavy kinesin et al and kept in the right places by cytoskeletons, of course they then use their highly informational structures to click together as if by magic. But that is after the really interesting stuff from my perspective has long since happened. My microjets thought experiment illustrates the same process, bringing out in particular that we have to address clumping work and configuring work to get to the required macromolecules to carry out the biofunctions, thence the force of TBO's thermodynamic reasoning [and implied links to information theory].

2] Q, 115: Imaginary cases can only provide weak premises and weaker conclusions

Excuse me -- as I already pointed out by highlighting several of Galileo's most famous experiments that probably weren't [performed, at least not quite as he reported] -- that's being neat and cute with rhetoric but dodging the issues on the merits. The above Wiki cite for responding on Dave's concern should suffice to show what I am driving at on the microjets thought experiment, and that it is VERY empirically valid. And BTW, the point of a gedankenexperiment is that it works in accord with the known relevant natural regularities/laws of physics, so that it is an in-principle feasible experiment; it just may not be technically or timewise or financially feasible to do it just now. In short, a good thought experiment brings out the inner logic of the science in a physically conceivable situation, as a test of coherence. [It can also be very fruitful in getting to new theoretical constructs, i.e. in hypothesis formation.] For two famous and more modern cases, much of Einstein's original conceptualisation of Relativity was triggered by "taking" an imaginary ride on a beam of light in the context of the expected behaviour of the physics. Kekule's benzene-ring snake swallowing its tail is also famous. On the other side of the story, Einstein constructed such a thought experiment to try to undo the uncertainty principle at the famous Copenhagen conference.
He proved to be wrong in his initial conclusions, but in the process discovered the energy-time form of the uncertainty principle. Today, scientific visualisation and computer simulation carry out the same basic "what-if" process -- and are not generally regarded as only providing "weak premises and weaker conclusions."

3] It's not just KF, folks . . .

But also it's not just me, out here in the boonies, imagining nanobots making up micro-jets and asking whether vats sitting there would spontaneously form such microjets with parts that on average are 1 micron in scale, interact at 10 microns and are separated by on average 1 cm -- 1 million parts in a vat with a cubic metre of fluid -- by the well-known, commonly empirically observed thermodynamics of diffusion. [To get another look at the same physics: Put a drop of ink in a glass of water and see it "dissolve." How long on average would we have to wait for it to spontaneously come back together, Q, and why? And, if the drop were instead parts for our famous little jet, which have to not only be clumped but configured to fly, how long for that to happen by chance, Q?] For, I am essentially scaling down to quasi-molecular scale the point that Hoyle -- no mean thermodynamicist! -- made in remarking about tornadoes in junkyards and assembling 747's by lucky correlations of materials and forces that just happened to do the relevant clumping and configuring work: voila -- a flyable jumbo-jet. So, why is it that over in Seattle, Boeing doesn't save money and send in the twisters into the jumbo-jet parts warehouses? [Onlookers, the answer is so obvious that it can only be dodged by cute evasions.] In case you don't get the force of the point, here is Dawkins, in The Blind Watchmaker:
Hitting upon the lucky number that opens the bank's safe [NB: cf. here the case in Brown's The Da Vinci Code] is the equivalent, in our analogy, of hurling scrap metal around at random and happening to assemble a Boeing 747. Of all the millions of unique and, with hindsight equally improbable, positions of the combination lock, only one opens the lock. Similarly, of all the millions of unique and, with hindsight equally improbable, arrangements of a heap of junk, only one (or very few) will fly. The uniqueness of the arrangement that flies, or that opens the safe, has nothing to do with hindsight. It is specified in advance. [p. 8.]
Of course Dawkins uses comparisons that vastly understate the required configuration space, and his Mt Improbable type rebuttal has to address the problem of the isolation of the islands of function in the sea of non-functional configs. You have to find your island before you can climb its hills! Here, too, is Robert Shapiro in his recent Sci Am remark on the "popular" RNA world hypothesis:
RNA nucleotides are familiar to chemists because of their abundance in life and their resulting commercial availability. In a form of molecular vitalism, some scientists have presumed that nature has an innate tendency to produce life's building blocks preferentially, rather than the hordes of other molecules that can also be derived from the rules of organic chemistry. This idea drew inspiration from . . . Stanley Miller. He applied a spark discharge to a mixture of simple gases that were then thought to represent the atmosphere of the early Earth. Two amino acids of the set of 20 used to construct proteins were formed in significant quantities, with others from that set present in small amounts . . . more than 80 different amino acids . . . have been identified as components of the Murchison meteorite, which fell in Australia in 1969 . . . By extrapolation of these results, some writers have presumed that all of life's building blocks could be formed with ease in Miller-type experiments and were present in meteorites and other extraterrestrial bodies. This is not the case. A careful examination of the results of the analysis of several meteorites led the scientists who conducted the work to a different conclusion: inanimate nature has a bias toward the formation of molecules made of fewer rather than greater numbers of carbon atoms, and thus shows no partiality in favor of creating the building blocks of our kind of life . . . To rescue the RNA-first concept from this otherwise lethal defect, its advocates have created a discipline called prebiotic synthesis. They have attempted to show that RNA and its components can be prepared in their laboratories in a sequence of carefully controlled reactions, normally carried out in water at temperatures observed on Earth . . . . Unfortunately, neither chemists nor laboratories were present on the early Earth to produce RNA . . . .
Shapiro then goes for the jugular (in a remark that inadvertently also applies to his preferred metabolism first scenario, as TBO pointed out and as my own little thought experiment underscores):
The analogy that comes to mind is that of a golfer, who having played a golf ball through an 18-hole course, then assumed that the ball could also play itself around the course in his absence. He had demonstrated the possibility of the event; it was only necessary to presume that some combination of natural forces (earthquakes, winds, tornadoes and floods, for example) could produce the same result, given enough time. No physical law need be broken for spontaneous RNA formation to happen, but the chances against it are so immense, that the suggestion implies that the non-living world had an innate desire to generate RNA. The majority of origin-of-life scientists who still support the RNA-first theory either accept this concept (implicitly, if not explicitly) or feel that the immensely unfavorable odds were simply overcome by good luck.
In short, onlookers, on the evidence, Q is simply being "cleverly" selectively hyperskeptically difficult and evasive. [I suspect he has an undergrad minor in physics or more, or at any rate sufficient physics and/or chemistry to understand just what I am pointing to in raising the statistical thermodynamics principles-based issues in the above.] GEM of TKI

kairosfocus
January 13, 2008 at 03:08 PM PDT
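The arithmetic in the comment above is easy to reproduce exactly (Python, integer math); the 7.145 figure is the leading digits of 4^900 rounded to four significant figures.

```python
configs = 4 ** 900          # configuration count for 900 DNA base pairs
print(len(str(configs)))    # 542 digits, i.e. about 10^541
print(str(configs)[:4])     # leading digits 7144... (~7.145e541 rounded)
print(configs > 10 ** 150)  # True: far beyond the cited Dembski-style bound
```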
KF asks in 113, "“Weak premises” — Such as?" and in the same post mentions, "Nice way to put what I said in my imaginary case, ..." You answered it for me. Imaginary cases can only provide weak premises and weaker conclusions. Quite simply, Gedankenexperiments aren't empirical. (http://www.m-w.com/dictionary/gedankenexperiment)

Q
January 13, 2008 at 12:23 PM PDT
kf, have you read "Edge of Evolution"? Behe describes protein binding and used the magnet analogy. He's a Professor of Biochemistry, so I'm pretty sure he knows what he's talking about in this regard, and he has no reason whatsoever to exaggerate the self-assembly capabilities of organic machinery. By the same token, it's obvious you are not a biochemistry professor, so I'm going to believe Behe - much more so because what Behe describes is my understanding from other reliable sources as well.

Any clumping is temporary. Brownian motion knocks the clumps apart and also ensures that unattached parts move around until they eventually reach their designated attachment point. Only when a part is fitted where it belongs is Brownian motion overcome, so they don't come back apart. If you design machines at the nanometer scale, strategically placing binding (and repelling) sites in 3 dimensions, they will indeed self-assemble just by putting them in a fluid where they can randomly migrate. You need to fit this into your mental model of how sub-cellular machinery works.

This is the basic mechanism of how many drugs and toxins work. They are typically small molecules with precise shapes and binding sites that snap into some much larger protein. The effect of it is to slightly alter the 5-dimensional properties of the target so that the target can no longer bind to what it was supposed to bind to. Sort of like putting sand into a Swiss watch. This explains why it's so difficult to find effective drugs. First a drug has to target a protein in the bacterial invader that doesn't exist in the host, lest it kill them both, and then it still has to have the precise 5-dimensional properties so that it snaps onto the target protein and stays there.

DaveScot
January 13, 2008 at 03:56 AM PDT
H'mm: It is wise to put up a follow-up note or two on points picked up by Q and DaveScot. Meanwhile, on the main issue in the blog thread, I observe that, per the AmH dictionary as a witness, the word empirical means:
a. Relying on or derived from observation or experiment: empirical results that supported the hypothesis. b. Verifiable or provable by means of observation or experiment: empirical laws.
It seems to me that our first-person experience of ourselves as agents with reasonably reliable minds that manifest intelligence [e.g. through producing functional information], and our consistent observation of others as agents, fits in under this rubric. So, I think there is excellent reason to hold that any claimed account of intelligence that ignores this fact and its origins, or cannot credibly ground them on its premises, is a non-starter. [Evo Mat fans, this means you.] Now, on points of follow-up: 1] Q, 109: it is sufficient to express that your analysis went too far on weak premises. "Weak premises" -- such as? [In short, I am suggesting that I have done an inference to best explanation WITHOUT bringing in the Darwinista/Evo Mat selective hyperskepticism, starting from the implications of how we interpret the message in the face of the possibility of noise.] Of course I do not offer a proof beyond all rational dispute – nothing in science is that way, and the sudden insistence on "extraordinary proof" when worldview-level assertions are in question, relative to otherwise obvious empirically anchored evidence, is suspect. 2] The filter is to find which is the best explanation. It is not to find what must be the explanation. That is a significant difference rooted in the epistemology of the problem . . . . your "ask whether" claim is really "test whether the claim passes certain probabilities." In other words, ID as a theoretical framework for science isn't about asking and asserting. It is about testing and demonstrating. Excuse me – have you actually seen what I have repeatedly explicitly said and linked routinely on the subject of science as IBE and of the ID inference as an instance of that? FYI: I have always pointed out that the issue of inference to design across the commonly observed causal factors – chance, necessity, agency – is a matter of empirically anchored, provisional inference to the best current explanation. This is all science can offer on matters of consequence, and it is why Popper put forward the potential for falsification as a virtue of scientific theorising. In the case excerpted, I have pointed not to absolute impossibility, but to statistical improbabilities so overwhelming that they show the direction of observed spontaneous change in the real world: e.g. diffusion does not normally undo itself spontaneously, precisely because of the large difference in statistical weight between the clumped and the dispersed macrostates. In short, you are – I believe inadvertently, but understandably [thermodynamics views are sometimes a bit hard to follow] – tilting at a strawman, which would make any "fisking" you put up miss the real mark. But then, the linked IDEAS page says pretty much the same thing I have, e.g. here [on inference to design across chance-necessity-agency]. Excerpting on Hoyle's tornado-in-a-junkyard case [and in a context taking up Shapiro's recent remarks in Sci Am en passant]:
. . . the significance of FSCI naturally appears in the context of considering the physically and logically possible but vastly improbable creation of a jumbo jet by chance. Instantly, we see that mere random chance acting in a context of blind natural forces is a most unlikely explanation, even though the statistical behaviour of matter under random forces cannot rule it strictly out. But it is so plainly vastly improbable, that, having seen the message -- a flyable jumbo jet -- we then make a fairly easy and highly confident inference to its most likely origin: i.e. it is an intelligent artifact. For, the a posteriori probability of its having originated by chance is obviously minimal -- which we can intuitively recognise, and can in principle quantify . . . . In short, there is a distinct difference and resulting massive, probability-based credibility gap between having components of a complex, information-rich functional system with available energy but no intelligence to direct the energy to construct the system, and getting by the happenstance of "lucky noise" to that system. Physical and logical possibility is not at all to be equated with probabilistic credibility -- especially when there are competing explanations on offer -- here, intelligent agency -- that routinely generate the sort of phenomenon being observed. . . . . through multiplying the many similar familiar cases, we can plainly make a serious argument that FSCI is highly likely to be a "signature" or reliable sign that points to intelligent -- purposeful -- action. [Indeed, there are no known cases where, with independent knowledge of the causal story of the origin of a system, we see that chance forces plus natural regularities without intelligent action have produced systems that exhibit FSCI. On the contrary, in every such known case of the origin of FSCI, we see the common factor of intelligent agency at work.] Consequently, we freely infer on a best and most likely explanation basis [to be further developed below], that: Absent compelling reason to conclude otherwise, when we see FSCI we should infer to the work of an intelligence as its best, most credible and most likely explanation. (And, worldview-level question-begging does not constitute such a "compelling reason.")
I think it is fair comment to observe that I have explicitly and even insistently argued in an inference-to-best-explanation context. Indeed, on point iv in the thought experiment comment above, I noted:
iv] In the control vat, we simply leave nature to its course. Q: Will a car, a boat, a sub or a jet, etc, or some novel nanotech emerge at random? [Here, we imagine the parts can cling to each other if they get close enough, in some unspecified way, similar to molecular bonding; but that the clinging force is not strong enough at appreciable distances [say 10 microns or more] for them to immediately clump and precipitate instead of diffusing through the medium.] ANS: Logically and physically possible (i.e. this is subtler than having an overt physical force or potential energy barrier blocking the way!) but the equilibrium state will on statistical thermodynamics grounds overwhelmingly dominate — high disorder. Q: Why? A: Because there are so many more accessible scattered-state microstates than there are clumped-at-random state ones, or, even more so, functionally configured flyable jet ones.
That is more than clear enough, I believe. Not to mention, I have repeatedly remarked and linked on the general nature of science as an empirically anchored, provisional IBE exercise, here. 3] DS, 110: It will if the parts are like proteins and snap together in the proper manner when they get close to each other. You will see that I in fact put this in as a sci-fi feature of the model, i.e. once the parts get within, say, 10 microns they tend to move together. [In the real world, at about 10 molecular diameters there are increasingly effective attractive forces due to mutual polarisation of electron clouds etc, which then tend to pull molecules together until they begin to push up against each other, at which time strong repulsive forces limit separation. This is reflected in the classical intermolecular forces diagram familiar, I suppose, to those who have done up to about a freshman physics course; assuming there is a sufficient parallel with our own A-Level Physics. This is the theoretical basis for e.g. Hooke's law on the almost linear elasticity of materials within limits.] But also, this is just the clumping part of the deal: if we get parts to simply clump spontaneously, they are clumped at random. (And indeed, one can get molecular species relevant to prebiotic soups to clump at random; unfortunately, they do not tend to form biofunctional proteins but rather useless tars, as has been pointed out from TBO down to Shapiro's recent Sci Am article.) That's why there is a configuring work term to address as well as the clumping work term. 4] Imagine if the parts to an airplane each had little magnets attached to them so that when you got two parts that belong together in close proximity they snap together the rest of the way by themselves. . . . Nice way to put what I said in my imaginary case, under iv, Q – just after the bit excerpted by Q: [Here, we imagine the parts can cling to each other if they get close enough, in some unspecified way, similar to molecular bonding; but that the clinging force is not strong enough at appreciable distances [say 10 microns or more] for them to immediately clump and precipitate instead of diffusing through the medium.] [The parts are on this view about a micron in size . . .] Can I borrow this? 5] Mismatched magnets repel so wrong parts don't stick together. With parts like that you really do just have to put them in a fluid, stir it chaotically, and the correct final assembly will emerge. Not quite. Magnets, of course, couple in two ways and repel in two ways, so it's 50-50 odds on coupling at any point, which is okay: sometimes molecules will bond at a certain point, sometimes they won't. If parts come together at random in the Brownian motion etc of the vats, half the time they will stick any old how, half the time they won't. It will be hard to get the clumping to extend to all the parts, as the natural tendency will be to spread across the whole vat, so that the average separation between any two parts of interest for the designed configuration will be about 50 cm [in 1 cubic metre of liquid]. That's why I spoke of clumping work. This models the challenge of dilution of the emerging macromolecular species in a real-world pre-biotic soup. [You have to get the macromolecules for the emerging first life form to be close together, maybe within about a micron . . .] Next, if there is clumping, it will tend not to be in the relevant configuration – i.e. there are a lot more ways for things to be clumped in a mess than in a flyable jet.
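The "about 50 cm" figure can be sanity-checked numerically: the mean distance between two points dropped uniformly at random into a 1-metre cube is about 0.66 m, the same order of magnitude. A minimal Monte Carlo sketch in Python (illustrative, not part of the original argument):

    import math
    import random

    def mean_pair_distance(trials=100_000, seed=1):
        # Average distance between two uniform random points in a 1 m cube.
        rng = random.Random(seed)
        total = 0.0
        for _ in range(trials):
            p = (rng.random(), rng.random(), rng.random())
            q = (rng.random(), rng.random(), rng.random())
            total += math.dist(p, q)
        return total / trials

    print(mean_pair_distance())   # ~0.66 m -- same order as the ~50 cm cited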
[This models the issue that macromolecules have to be of the right composition and folding, and then have to be fitted together precisely, to get biofunctional cells going – all of this taking up a lot of information at genetic and epigenetic levels.] So, at two major levels of highly informed work – [1] clumping of the RIGHT parts and [2] configuring of these parts to work together effectively – we won't get to a flyable jet from a dispersed collection of jet parts. [The implications for getting a workable cell to form spontaneously should be fairly clear.] This you saw in your final comment . . .
it just makes the proteins that much more unlikely to arise by chance, as they have to be predesigned to have matching binding sites with the correct other proteins and actively avoid binding with incorrect ones. Using the same 5-dimensional method, proteins can bind or repel molecules (usually simpler molecules or even individual atoms) that aren't other proteins.
GEM of TKI
kairosfocus
January 13, 2008 at 01:45 AM PDT
kf, Will a car, a boat, a sub or a jet, etc, or some novel nanotech emerge at random? It will if the parts are like proteins and snap together in the proper manner when they get close to each other. Proteins at the nanomolecular scale are quite dissimilar to the larger machine parts that most people are familiar with. Because of the electrostatic and hydrophobic/hydrophilic properties on their surfaces, they have to be modeled in at least 5 dimensions. Imagine if the parts to an airplane each had little magnets attached to them so that when you got two parts that belong together in close proximity they snap together the rest of the way by themselves. Mismatched magnets repel, so wrong parts don't stick together. With parts like that you really do just have to put them in a fluid, stir it chaotically, and the correct final assembly will emerge. That said, it just makes the proteins that much more unlikely to arise by chance, as they have to be predesigned to have matching binding sites with the correct other proteins and actively avoid binding with incorrect ones. Using the same 5-dimensional method, proteins can bind or repel molecules (usually simpler molecules or even individual atoms) that aren't other proteins.
DaveScot
January 12, 2008 at 11:13 PM PDT
KF, in one of two posts above, mentions, "The issue then becomes to ask whether the observed organised complexity is rooted in chance, mechanical necessity showing itself in natural regularities, or agency." I'll pick just that point, because I don't need to fisk your entire always-linked on this site, and it is sufficient to express that your analysis went too far on weak premises. What you mention is one of the issues. But I am strongly suggesting that you are misrepresenting the explanatory filter. The filter is to find which is the best explanation. It is not to find what must be the explanation. That is a significant difference rooted in the epistemology of the problem. Accordingly, the rule for determining the best explanation is based upon the probabilities of each of the explanations. Go here to see my point: http://www.ideacenter.org/contentmgr/showdetails.php/id/1203. Each of the steps has a probability test. Thus, your "ask whether" claim is really "test whether the claim passes certain probabilities." In other words, ID as a theoretical framework for science isn't about asking and asserting. It is about testing and demonstrating.
Q
January 12, 2008 at 08:18 PM PDT
PS: On micro-jets and nanobots, from Appendix 1, section 6, of my always linked: ______________ 6] It is worth pausing to now introduce a thought experiment that helps underscore the point, by scaling down to essentially molecular size the tornado-in-a-junkyard-forms-a-jet example raised by Hoyle and mentioned by Dawkins with respect . . . : NANOBOTS & MICRO-JETS THOUGHT EXPT: i] Consider the assembly of a Jumbo Jet, which requires intelligently designed, physical work in all actual observed cases. That is, orderly motions were impressed by forces on selected, sorted parts, in accordance with a complex specification. (I have already contrasted the case of a tornado in a junkyard, which could logically and physically do the same; but the functional configuration[s] are so rare relative to non-functional ones that random search strategies are maximally unlikely to create a flyable jet, i.e. we see here the logic of the 2nd Law of Thermodynamics at work.) ii] Now, let us shrink the example to a micro-jet so small [~ 1 cm or even smaller] that the parts are susceptible to Brownian motion, i.e. they are of about micron scale [for convenience] and act as "large molecules." Let's say there are about a million of them, some the same, some different, etc. In principle, possible. Do so also for a car, a boat and a submarine, etc. iii] In several vats of a convenient fluid, each of volume about a cubic metre, decant examples of the differing mixed sets of nano-parts, so that the particles can then move about at random, diffusing through the liquids as they undergo random thermal agitation. iv] In the control vat, we simply leave nature to its course. Q: Will a car, a boat, a sub or a jet, etc, or some novel nanotech emerge at random? [Here, we imagine the parts can cling to each other if they get close enough, in some unspecified way, similar to molecular bonding; but that the clinging force is not strong enough at appreciable distances [say 10 microns or more] for them to immediately clump and precipitate instead of diffusing through the medium.] ANS: Logically and physically possible (i.e. this is subtler than having an overt physical force or potential energy barrier blocking the way!) but the equilibrium state will on statistical thermodynamics grounds overwhelmingly dominate — high disorder. Q: Why? A: Because there are so many more accessible scattered-state microstates than there are clumped-at-random state ones, or, even more so, functionally configured flyable jet ones. (To explore this concept in more detail, cf. the overviews here [by Prof Bertrand of U of Missouri, Rolla], and here -- a well-done research term paper by a group of students at Singapore's NUS. I have extensively discussed this case with a contributor to ARN known as Pixie, here. Pixie: appreciation for the time & effort expended, though of course you and I have reached very different conclusions.) v] Now, pour a cooperative army of nanobots into one vat, capable of recognising jet parts and clumping them together haphazardly. [This is, of course, work, and it replicates bonding at random. Work is done when forces move their points of application along their lines of action. Thus, in addition to the quantity of energy expended, there is also a specificity of resulting spatial rearrangement, depending on the cluster of forces that have done the work.
This, of course, reflects the link between work in the physical sense and in the economic sense; thence, also, the energy intensity of an economy with a given state of technology. Thereby, too, hangs much of the debate over responses to feared climate trends, but that is off topic . . .] Q: After a time, will we be likely to get a flyable nano-jet? A: Overwhelmingly, on probability, no. (For, the vat has ~ [10^6]^3 = 10^18 one-micron locational cells, and a million parts or so can be distributed across them in vastly more ways than they could be across, say, 1 cm or so for an assembled jet etc, or even just a clumped-together cluster of micro-parts. [A 1 cm cube has in it [10^4]^3 = 10^12 cells, and to confine the nano-parts to that volume obviously sharply reduces the number of accessible cells consistent with the new clumped macrostate.] But also, since the configuration is constrained, i.e. the mass in the micro-jet parts is confined as to accessible volume by clumping, the number of ways the parts may be arranged has fallen sharply relative to the number of ways that the parts could be distributed among the 10^18 cells in the scattered state. (That is, we have here used the nanobots to essentially undo diffusion of the micro-jet parts.) The resulting constraint on spatial distribution of the parts has reduced their entropy of configuration. For, where W is the number of ways that the components may be arranged consistent with an observable macrostate, and since by Boltzmann, entropy S = k ln W, we see that W has fallen, so S too falls on moving from the scattered to the clumped state. vi] For this vat, next remove the random-cluster nanobots, and send in the jet-assembler nanobots. These recognise the clumped parts and rearrange them to form a jet, doing configuration work. (What this means is that within the cluster of cells for a clumped state, we now move and confine the parts to those sites consistent with a flyable jet emerging. That is, we are constraining the volume in which the relevant individual parts may be found even further.) A flyable jet results — a macrostate with a much smaller statistical weight of microstates. We can see that of course there are vastly fewer clumped configurations that are flyable than those that are simply clumped at random, and thus we see that the number of microstates accessible due to the change, [a] scattered --> clumped and now [b] onward --> functionally configured macrostates, has fallen sharply, twice in succession. Thus, by Boltzmann's result S = k ln W, we also see that the entropy has fallen in succession as we moved from one state to the next: a fall in S on clumping, and a further fall on configuring to a functional state; dS_tot = dS_clump + dS_config. [Of course, to do that work in any reasonable time or with any reasonable reliability, the nanobots will have to search and exert directed forces in accord with a program, i.e. this is by no means a spontaneous change, and it is credible that it is accompanied by a compensating rise in the entropy of the vat as a whole and its surroundings. This thought experiment is by no means a challenge to the second law. But it does illustrate the implications of the probabilistic reasoning involved in the microscopic view of that law, where we see sharply configured states emerging from much less constrained ones.] vii] In another vat we put in an army of clumping and assembling nanobots, so we go straight to making a jet, based on the algorithms that control the nanobots.
Since entropy is a state function, we see here that direct assembly is equivalent to clumping and then reassembling from a random "macromolecule" to a configured, functional one. That is: dS_tot (direct) = dS_clump + dS_config. viii] Now, let us go back to the vat. For a large collection of vats, let us now use direct micro-jet assembly nanobots, but in each case we let the control programs vary at random a few bits at a time -- say, hit them with noise bits generated by a process tied to a Zener noise source. We put the resulting products in competition with the original ones, and if there is an improvement, we allow replacement. Iterate, many, many times. Q: Given the complexity of the relevant software, will we be likely, for instance, to come up with a hyperspace-capable spacecraft or some other sophisticated and un-anticipated technology? (Justify your answer on probabilistic grounds.) My prediction: we will have to wait longer than the universe exists to get a change that requires information generation (as opposed to information and/or functionality loss) on the scale of 500 – 1000 or more bits. [See the info-generation issue over macroevolution by RM + NS?] ix] Try again, this time to get to even the initial assembly program by chance, starting with random noise on the storage medium. See the abiogenesis/origin-of-life issue? x] The micro-jet is of course an energy-converting device which exhibits FSCI, and we see from this thought expt why it is utterly improbable -- on the same grounds as those on which we base the statistical view of the 2nd law of thermodynamics -- that it should originate spontaneously by chance and necessity only, without agency. xi] Extending to the case of origin of life, we have cells that use sophisticated machinery to assemble the working macromolecules, direct them to where they should go, and put them to work in a self-replicating, self-maintaining automaton. Clumping work [if you prefer that to TBO's term, chemical work, fine] and configuring work can be identified and applied to the shift in entropy through the same S = k ln W equation. For, first we move from scattered at random in the proposed prebiotic soup, to chained in a macromolecule, then onwards to having particular monomers in specified locations along the chain -- constraining accessible volume again and again, and that in order to access observably bio-functional macrostates. Also, through S = k ln W and Brillouin, TBO link entropy to information, viewed as "negentropy," citing as well Yockey and Wicken's work and noting their similar definition of information; i.e. this is a natural outcome of the OOL work in the early 1980's, not a "suspect innovation" of the design thinkers in particular. BTW, the concept of complex, specified information is also similarly a product of the work in the OOL field at that time; it is not at all a "suspect innovation" devised by Mr Dembski et al, though of course he has provided a mathematical model for it. [I have also just above pointed to Robertson, on why this link from entropy to information makes sense — and BTW, it also shows why energy converters that use additional knowledge can couple energy in ways that go beyond the Carnot efficiency limit for heat engines.] ______________ In short, the issue is serious, and it is not dependent on dubious metaphysical speculation but instead is a matter of the generally accepted and commonly used underlying principles of thermodynamics and commonplace experience.
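The two entropy drops above can be checked directly from S = k ln W, using the cell counts just given and treating each of the ~million parts as independently locatable (a crude approximation, but adequate for the sign and scale of the change). A minimal Python sketch:

    import math

    k_B = 1.380649e-23        # Boltzmann constant, J/K
    n_parts = 1_000_000       # ~a million micron-scale parts, as above
    cells_scattered = 1e18    # one-micron cells in the 1 m^3 vat
    cells_clumped = 1e12      # one-micron cells in a 1 cm^3 clump

    # W ~ (accessible cells)^(number of parts); work with ln W to avoid overflow.
    lnW_scattered = n_parts * math.log(cells_scattered)
    lnW_clumped = n_parts * math.log(cells_clumped)
    dS_clump = k_B * (lnW_clumped - lnW_scattered)
    print(f"dS on clumping ~ {dS_clump:.2e} J/K (negative: entropy falls)")

    # Point viii's 500-bit threshold: one specific 500-bit configuration
    # has probability 2^-500 per blind trial.
    print(f"2^-500 ~ {2.0 ** -500:.2e}")

The configuring step would subtract a further dS_config of the same sign, per the state-function decomposition above.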
But, since the design inference evidence and message cut clean across the story of origins told by those who dominate key power centres in our culture, it is too often treated with -- always fallacious -- selective hyperskepticism, or worse. Those who do so need to listen to Cicero again, and consider whether they automatically assume that, unless there is independent proof -- how, as that too requires a message! -- all is lucky noise. [Kindly note, too, that refusing to beg the question on the possibility of agency at OOL or OOBPLBD or OO Cosmos (henceforth OOC) -- as I address in sections B - D of my always linked -- is to recognise possibilities, not to commit the "error" imagined by Kantians of imposing a dubious metaphysical "assumption" that agency exists. Indeed, as I discussed in my always linked, Kant made a key little error at the beginning, and his dichotomy of the cosmos into noumenal and phenomenal is perniciously self-referentially incoherent; thus, fallacious.] GEM of TKI
kairosfocus
January 12, 2008 at 06:42 PM PDT
Q: I would reconceptualise. 1] From TBO's TMLO on, the key thing is configurations and clustering. The characteristic objects of ID investigations are subject to multiple [generally speaking, more than 10^150] potential configurations, with significantly different discernible outcomes. Some of these are functional or specific in recognisable and interesting ways. 2] The issue then becomes to ask whether the observed organised complexity is rooted in chance, mechanical necessity showing itself in natural regularities, or agency. 3] These are generally and fairly easily observed causal patterns and may be independently present in given situations -- i.e. they are not mutually reducible [consider my favourite falling, tumbling die involved in playing a game]. Natural regularities are like heavy objects tending to fall if not supported. Chance is like the way a die, as such an object, then comes to rest with one of six faces uppermost. Agency is like our experience of using dice in games, to achieve our purposes. 4] When the config space is such that islands of functionality are credibly less than 1 in 10^150 of the overall space [that is why I use the range 10^150 to 10^300], no reasonable random walk will credibly be able to find such an island, on the gamut of the cosmos's matter and duration. Thus, there is no credible basis for hill-climbing to spontaneously begin, e.g. by body-plan-level natural selection or its analogues in the pre-biotic world. 5] But we do know by much direct observation and experience that agents, through understanding the logic of configurations, are able to configure entities to get close to function, and to do troubleshooting to get the complex object to work. For instance, I compose this comment and do cleanup editing on typos. 6] In short, it is known that intelligent agents can create FSCI, and why -- and why it is utterly unlikely for chance to do so. High-contingency situations are not dominated by natural regularities. 7] Thus, the inference to best -- and known reliable -- explanation is agency. The problem is not with the logic or the evidence, but that the implications for certain cases cut clean across dominant parties in the sciences, education, media and power-centres of our civilisation. In part 2 I will give a case in point of the sort of thermodynamics thinking -- NB JT -- that underlies this insight. GEM of TKI
kairosfocus
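Point 4 can be made concrete with the conventional round figures used in these discussions (~10^80 atoms, ~10^45 state changes per atom per second, ~10^25 seconds; bounding estimates, not measurements). A small Python sketch:

    atoms = 1e80            # rough atom count for the observable cosmos
    ops_per_second = 1e45   # generous bound on state changes per atom per second
    seconds = 1e25          # generous bound on available time
    trials = atoms * ops_per_second * seconds   # ~1e150 blind trials in all

    # Islands of function at 1 in 10^150 to 1 in 10^300 of the config space:
    print(trials * 1e-150)   # ~1 expected hit at the optimistic end
    print(trials * 1e-300)   # ~1e-150 expected hits at the other end

On these figures, even the whole cosmos run as a search machine barely reaches one expected hit at the 10^150 end, which is the sense in which the random walk cannot credibly find the island.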
January 12, 2008 at 06:11 PM PDT
WinglesS, You've only named one alternative to ID but X-Force is pretty much the same as ID imo. Yes, that is exactly the point. X-Force theory has all of the same explanatory power, predictive power, and testability as ID theory has, which is absolutely none whatsoever. As they stand now, these are both parodies of useful scientific theories. ID doesn't have to be bogged down by metaphysical issues like libertarian free will, consciousness, and so on. But once we strip all of this metaphysics away from the meaning of intelligence, nothing at all remains that we can use to make sense of ID theory! Just read what others here say about intelligence - that it does entail free will, qualia, etc. This is the heart of what I object to in ID. Perhaps it would be good for you to list another alternative that isn't like ID or Darwinian Evolution at all. I think various ideas about structuralism aren't like either ID or evolution, but I don't think they are "alternatives" yet because, like ID, none of these have been fleshed out into theories either. But the important point here is that just because we don't have a good alternative does not make meaningless theories into good science. For example, we do not know how proteins manage to get folded into functional 3-D configurations inside of cells - it is a big mystery. Shall I propose that some little tiny invisible intelligent agent resides inside each of our cells, busily folding up proteins? No, that would be a very bad theory, and the fact that no other theory has been accepted doesn't make it any better. (By the way, why doesn't ID assert that intelligent causation is responsible for protein folding? After all, it has already been shown that proteins could never fold themselves just by random chance; it would take years instead of milliseconds or seconds for that to happen!) Perhaps your point that ID shouldn't be considered science is valid Thanks, WinglesS, I'm very gratified to have made the point. As for your doubts that other scientific disciplines suffer from circular definitions and unsupportable metaphysical claims, I disagree, but as you say that is another discussion. If you think about my example of Newtonian gravity a bit more you might be able to see that when scientists do characterize some hypothetical cause with enough detail, we can test to see if our characterization corresponds to a real cause or not. (And nobody asserts that "abiogenesis" is a theory; it is the phenomenon that we need a theory to explain.)
aiguy
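The folding-time remark above is the classic Levinthal back-of-envelope. With the usual illustrative assumptions (roughly 3 backbone conformations per residue, and ~10^13 conformational samples per second), a blind search by a 100-residue chain would take absurdly long; a quick Python check:

    residues = 100
    conformations_per_residue = 3   # the usual Levinthal assumption
    samples_per_second = 1e13       # ~one conformational move per 0.1 ps

    total_conformations = conformations_per_residue ** residues   # ~5e47
    seconds = total_conformations / samples_per_second
    years = seconds / (3600 * 24 * 365)
    print(f"~{total_conformations:.1e} conformations -> ~{years:.1e} years")

Real proteins, of course, fold in milliseconds to seconds, which is the paradox: folding proceeds down a funneled energy landscape, not by blind search.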
January 12, 2008 at 04:24 PM PDT
aiguy (#64): "(M): Whatever the nature of consciousness, it is ultimately not (just) the body and brain. (A): And let's see what sort of empirical evidence you might muster to support this view…" As of 1996 the evidence included 61 independent Ganzfeld experiments, 2094 PK experiments using random event generators, and hundreds of other experiments involving tossing dice, dream research, and remote viewing. A lot more has accumulated since. There are high numbers of replications of these and other basic laboratory parapsychological experiments, which allows for meta-analyses of these studies. For instance, a meta-analysis on the results of the Stanford Research Institute remote viewing experiments undertaken between 1973 and 1988 returned odds against the hypothesis that the results were due to chance of more than a billion billion to one. These studies were replicated by the Princeton Engineering Anomalies Research Laboratory. (From Radin, 1989, 2006.) Meta-analysis is widely used today in psychology, sociology, and especially medical research (primarily therapy evaluations and epidemiology). A casual look at the British Medical Journal (BMJ) shows literally hundreds of such analyses conducted since 1999. Just one example of the many independent replications is where the subject was EEG correlations in two separated people (listed below). This is just the tip of an iceberg.
- Extrasensory electroencephalographic induction between identical twins, T.D. Duane and T. Behrend, Science, vol. 150: 367 (1965). (Note: this was in the 1960s, before the skeptic barrier was fully up in the mainstream scientific journals.)
- Possible physiological correlates of psi cognition, C.T. Tart, International Journal of Parapsychology, 5, 375-386 (1963)
These two papers generated a stream of conceptual replications by different groups, most of which had positive results. There are too many of these to list completely, but some examples:
- Intersubject EEG coherence: is consciousness a field?, D.W. Orme-Johnson, M.C. Dillbeck, R.K. Wallace, and G.S. Landrith, International Journal of Neuroscience, 16, 203-209 (1982)
- Information transmission under conditions of sensory shielding, R. Targ and H. Puthoff, Nature, 252, 602-607 (1974)
- EEG correlates to remote light flashes under conditions of sensory shielding, C.T. Tart, H. Puthoff, R. Targ (eds.), Mind At Large: IEEE Symposia on the nature of extrasensory perception, Hampton Roads Publishing Co. 1979, 2002
- Correlations between brain electrical activities of two spatially separated human subjects, J. Wackermann, C. Seiter, H. Keibel, and H. Walach, Neuroscience Letters, 336, 60-64 (2003)
- Event-related EEG correlations between isolated human subjects, D. I. Radin, Journal of Alternative and Complementary Medicine, 10, 315-324 (2004)
Another example of a type of experiment in parapsychology is in the area of human intentional effects on other living organisms such as cell cultures and other animals. Numerous controlled studies have been conducted by legitimate researchers. The following is a short list of some of the most interesting ones. I can give you references if you are interested.
- Algae and Psychokinesis, C. M. Pleass and N. Dean Dey
- Psychokinesis and Bacterial Growth, C. B. Nash
- Psychokinesis and Fungus Culture, J. Barry
- Psychokinesis and Red Blood Cells, W. Braud, G. Davis and R. Wood
- Red Blood Cells and Distant Healing, W. Braud
- Wound Healing in Mice and Spiritual Healing (& subsequent replication), B. Grad, R. J. Cadoret, G. I. Paul
- Malaria in Mice: Expectancy Effects and Psychic Healing, G. F. Solfvin
- Arousing Anesthetized Mice Through Psychokinesis, G. K. Watkins and A. M. Watkins
- "A Dog that seems to know when his Owner is Coming Home", R. Sheldrake
If you are an open-minded skeptic, perhaps you would be willing to peruse some of the general sources of pertinent evidence below.
- The Conscious Universe by Dean Radin
- Entangled Minds by Dean Radin
- Best Evidence: An Investigative Reporter's Three-Year Quest to Uncover the Best Scientific Evidence for ESP, Psychokinesis, Mental Healing, Ghosts and Poltergeists, Dowsing, Mediums, Near Death Experiences, Reincarnation, and Other Impossible Phenomena That Refuse to Disappear by Michael Schmicker
- Journal of Scientific Exploration, published by the Society for Scientific Exploration
- Mind At Large: Institute of Electrical and Electronics Engineers Symposia on the Nature of Extrasensory Perception (Studies in Consciousness) by Charles T. Tart, Harold E. Puthoff and Russell Targ (Editors)
- The Afterlife Experiments: Breakthrough Scientific Evidence of Life After Death by Gary E. Schwartz
- Twenty Cases Suggestive of Reincarnation by Ian Stevenson
- Near Death Experiences in Survivors of Cardiac Arrest: A Prospective Study in the Netherlands by Dr. Pim van Lommel, in the British medical journal The Lancet, Dec. 15, 2001
Much of the hard evidence for psi phenomena today is founded on laboratory experiments and not anecdotal evidence. This is just a sample of that body of evidence that simply can't reasonably be dismissed as fraud, trickery or self-delusion. Of course you are free to simply scoff and ignore this information since you know it can't be valid. I would term that selective hyperskepticism. I would add that I believe that much "anecdotal evidence" also cannot reasonably be dismissed - this is in the form of the testimony of vast numbers of ordinary people that ESP events that happen to them are real and often have verified information.
magnan
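For readers unfamiliar with how meta-analysis turns many modest studies into extreme combined odds, one standard recipe is Stouffer's method: convert each study's one-sided p-value to a z-score, sum, and renormalise. A minimal Python sketch, with invented p-values purely for illustration:

    from math import sqrt
    from statistics import NormalDist

    nd = NormalDist()

    def stouffer(p_values):
        # Convert each one-sided p-value to a z-score, sum, renormalise.
        z_scores = [nd.inv_cdf(1.0 - p) for p in p_values]
        z_combined = sum(z_scores) / sqrt(len(z_scores))
        return 1.0 - nd.cdf(z_combined)   # combined one-sided p-value

    # Ten hypothetical studies, each only marginally significant (p = 0.05):
    print(stouffer([0.05] * 10))   # ~1e-7, far stronger than any single study

This is how combined odds can be astronomically long even when no individual study is dramatic; whether the underlying studies are sound is, of course, the real point in dispute here.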
January 12, 2008 at 03:48 PM PDT
kairos, That's right, but isn't that what does actually happen for intelligence? From Archeology... Archeology does not have any notion at all of some abstract class of entities that philosophers call "intelligent agents". Archeology is the study of ancient human (and only human) civilizations. ...to ET searching There has never been any published scientific inference to an extra-terrestrial life form to explain anything. If, someday, SETI finds some phenomenon and wishes to argue that ET life forms are the best explanation, we can argue about the merits of their argument. If, for example, they detected a wide-band EM signal emanating from within a pulsar that had prime-number intervals (like the Contact example), I would think most scientists would not accept that life-forms were responsible. Are you sure that for Newton's theory there is a real qualitative difference and not just a quantitative one? Well, sure - Newton's definitions were demonstrably not circular, and did not depend on the truth of dualism or any other untestable proposition. Everyone may legitimately put so strict a definition of what is scientific, but then the same person should coherently be responsible to state that lots of scientific fields are, with that definition, non-scientific anymore. I disagree, but I think we should take this one step at a time. Using the very simple definition of science as meaning "Explanations ought to be definable in terms of things that ultimately we can all experience with our senses", I'd like folks to agree that ID fails as science because it can't provide a usable definition of intelligence that will serve ID's needs (i.e. be able to evaluate the claim that intelligence caused life). After that, if you'd like to argue that we can't define the components of Darwinian evolution, or physics, or chemistry, or some other field in this way, we can argue about that. (For example, I think that everybody knows quite well what a mutation is, and what differential reproduction is, and so on; it's just that many here don't believe that these things account for biological complexity. But all that means is that Darwinian evolution is wrong; it does not mean it is unscientific.) But we should note ...that biology is a science where there is no generally accepted exception-less necessary and sufficient statement of what "life" is; ... And indeed there are some serious borderline cases. But, rightly, biology is a recognised science. This is a very illustrative point, kairos. You are right - life is notoriously difficult to define. However, there is no biological theory that attempts to explain anything using "life" as the explanation! If I want to know how slime mold manages to find food, or how flowers orient to the sun, we cannot merely explain these things by saying "Because they are alive!" - this tells us nothing at all that we didn't already know (that our intuitive category of life seems to apply to slime mold and flowers). In just the same way, if you ask "What caused the flowers to be here in the first place?" and I answer "An intelligent cause!", without saying anything about what "intelligence" is supposed to mean, this tells us nothing at all we didn't know already. Sure, without any real definition we can categorize anything that could cause living things to exist as "intelligent" (including evolutionary processes, if that is what one thinks is responsible), but this doesn't actually say anything substantive about the cause at all. First, let us identify what intelligence is.
This is fairly easy: for, we are familiar with it from the characteristic behaviour exhibited by certain known intelligent agents — ourselves. This is exactly the type of circular reasoning I am objecting to! What is intelligence? It is what intelligent agents do. What are intelligent agents? Beings like us - humans. Why do you say humans are intelligent agents? Because they act intelligently.... Specifically, as we know from experience and reflection, such agents take actions and devise and implement strategies that creatively address and solve problems they encounter; a functional pattern that does not depend at all on the identity of the particular agents. In short, intelligence is as intelligence does. For starters, this describes evolutionary processes! Darwinian evolution devises and implements strategies and creatively addresses and solves problems; its basic strategy is trial and error, from which other strategies arise. In fact there are neural Darwinists who propose that evolutionary algorithms underlie human intelligence. I AM NOT PROPOSING THAT THESE THEORIES ARE TRUE. Rather, I am pointing out that your definition of intelligence accommodates the theory that you very much wish to exclude from the meaning of "intelligence". So, if we see evident active, intentional, creative, innovative and adaptive... More problems. First, the word "intentional" has no meaning we can evaluate against empirical evidence. Second, in the context of ID, we can not know if the Designer was innovative and adaptive, or perhaps was merely a one-trick pony as it were: Maybe the Designer could create the life forms we see, but is utterly incapable of doing anything else at all - like an idiot savant (forgive the politically incorrect label). I think this is helpful, kairos, and I applaud you for being willing to take a crack at developing some sort of characterization to give meaning to ID's claim. Hopefully we can see that once one actually attempts to say what we mean by "intelligence", when used as an explanation for life, ID falls apart.
aiguy
January 12, 2008 at 03:11 PM PDT
KF, Regarding 103 ... AMEN. Vivid
vividblue
January 12, 2008 at 02:06 PM PDT
aiguy, I've been thinking about your argument about how "ID fails to provide an independent, operationalized definition of 'intelligence'". I suggest a different approach to your concerns than simply repeating your assertion. Specifically, I've been addressing issues of ID's claims regarding probability. Those claims, I suggest, are the first-order concerns of ID. Claims about intelligence are only second- or third-order concerns, at least when viewed through Dembski's explanatory filter. (For my explanation, I'll put intelligence in quotes, to indicate that it is the concept in dispute, and that no specific attributes of intelligence are assumed.) What I mean about "first-order concerns" is that the first step of the explanatory filter is to make an observation about something. This observation step does not require "intelligence", as the observation could be made by man or machine. At that step, inference is used to determine a probability that the observation could have been caused by natural laws/regularity. That again requires no "intelligence", as the inference could have come from a universal database following inferential algorithms, and have been performed by man or machine. The next step is to test for chance. This is also a probability test. The test can be performed by extrapolating our known experiences about chance to determine a probability that this result was caused by random events. The test could be performed according to knowledge-based rules, so it requires no "intelligence", and could be performed by man or machine. The third test, for design, could also be based upon previous history. For example, we observe some striations on old arrowheads that haven't been observed to occur on rocks that have never been arrowheads. This again can be a rules-based test, so it need not require "intelligence" to perform. So far, working down Dembski's explanatory filter, no "intelligence" is needed to perform the tests. All that is needed is the means to form observations (collect data), access the history of knowledge, extrapolate it according to rules of extrapolation, and arrive at a probability of the observation being a result of some cause. The issue of "intelligence" does enter the problem regarding an understanding of what is design. This is where my argument diverges from yours. I don't care what "intelligence" is. As such, I'm suggesting it is improper to assert that "design" is the result of "intelligence". My position is along the lines of "intelligence is that set of properties that cause results which have a similar probability of occurring as the results of human action." I'm suggesting that intelligence isn't the known property. Instead, it should be considered as a placeholder for a set of properties to be evaluated. Kind of like our discussion of force, charge, and field. We don't need to know the mechanism of force. We just need to know that the property we call force is internally consistent with the use of the term "force". Same for field - we don't need to know the mechanism of a field to understand that some observations are best explained through the existence of a field. In other words, I'm suggesting that the understanding of what is "intelligence" is dependent upon the probabilities observed. In ID, that would be the observations that are extrapolated from the observed probabilities about design. I argue against saying that certain events probably result because we "know" that intelligence was involved.
With this approach, we can now try to fill out the properties that would go into the placeholder of "intelligence". I'm suggesting that using "intelligence" as the class of post-hoc explanations of observations is the most consistent with the theories of ID, especially the explanatory filter.
Q
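This reading of the filter as a rules-based procedure that a machine could execute can be made literal in a few lines of Python. The thresholds and names below are invented for illustration; this formalises Q's description of the filter, not Dembski's actual mathematics:

    def explanatory_filter(p_regularity, p_chance, matches_specification,
                           chance_threshold=1e-150):
        # Step 1: is the event well explained by law-like regularity?
        if p_regularity >= 0.5:
            return "regularity"
        # Step 2: is chance still a credible explanation?
        if p_chance >= chance_threshold:
            return "chance"
        # Step 3: overwhelmingly improbable AND specified -> design.
        return "design" if matches_specification else "chance"

    # Example: an event no law predicts, with p ~ 1e-200 by chance, that
    # matches an independently given specification:
    print(explanatory_filter(0.0, 1e-200, True))   # -> "design"

Note that no property of "intelligence" appears anywhere in the procedure, which is exactly the point at issue: the filter runs on probabilities and a specification test alone.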
January 12, 2008 at 12:34 PM PDT