Uncommon Descent Serving The Intelligent Design Community

What is Intelligence?


In a previous UD discussion I started about incompleteness I made the following claim: intelligence and life are not computable. A commenter kindly asked me to provide justification for that claim. Since at UD I usually try to keep different topics in different discussions, to stay as focused and reader-friendly as possible, here is my answer in a dedicated thread. Answering unavoidably requires investigating first what intelligence is and then what life is (given that the latter is an effect and the former is its cause).

A premise: intelligent design theory (IDT) per se doesn't deal with the deep nature of intelligence or of the designer. For the purposes of IDT, intelligence and designer can be considered simply as sources of information. All basic results of the theory hold true when, in its statements, we substitute "intelligence", "designer" or "intelligent cause" with "source of information". This makes sense because the job of IDT is limited to investigating the signs or outputs (CSI, IC ...) that evidence the inputs provided by information sources. In a sense IDT focuses on effects rather than on the ultimate meaning of their cause. Despite that, here I will try briefly to address something of the nature of intelligence, to satisfy the commenter's request for explanations.

Countless definitions of intelligence have been provided by philosophers and scientists from different points of view. This very fact is a sign that intelligence is a complex, many-faceted and controversial topic. It's likely that each of those definitions contains some truth. However, among them the pragmatic definitions cover only its lower aspects. In fact, to consider intelligence a mere tool for solving practical problems is to limit its power to the material world and reduce its ontological status to modest dimensions. We will see below how different the rank of intelligence appears when the problem of knowledge in its highest sense is considered. On the ground of narrow pragmatism there may even be no particular controversy between an IDer and a Darwinist. An IDer can well agree with definitions provided by evolutionists, for example this one by Stephen Jay Gould ("The Mismeasure of Man"): "Intelligence is the ability to face [and solve] problems in an unprogrammed creative manner". Gould also rightly added that intelligence cannot be adequately measured (causing the ire of many psychologists). Gould's remarks about intelligence's unprogrammability and unmeasurability might also relate to the previous UD post I referenced at the beginning, and in a sense they agree with the thesis I am going to defend here, given the relations between the concepts of measure and computation.

Before examining the couple of definitions of intelligence I will discuss here, I must clarify what I mean by computation in this context: a deterministic finite series of instructions or operations applied sequentially to a finite set of objects. Given this definition, a computation is a mechanistic process that a machine can carry out. In computability theory the archetype of such a machine is the so-called Turing machine (TM).
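To make this definition concrete, here is a minimal sketch of such a machine in Python. The simulator, the transition table, and the example machine (one that flips every bit of a binary tape and halts) are my own illustrative choices, not part of the original discussion; the point is only that a TM is nothing but a finite rule table applied deterministically, step by step, to a tape.

```python
# Minimal deterministic Turing machine: a finite transition table
# applied sequentially to a tape of symbols.
def run_tm(tape, rules, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape).strip(blank)
        # Read the symbol under the head (blank beyond the written tape).
        if pos == len(tape):
            tape.append(blank)
        symbol = tape[pos]
        # Deterministic: exactly one action per (state, symbol) pair.
        state, tape[pos], move = rules[(state, symbol)]
        pos += 1 if move == "R" else -1
    raise RuntimeError("step limit exceeded")

# Example machine: invert each bit, halt on the first blank.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm("1011", rules))  # -> 0100
```

Everything such a machine can ever produce is fixed in advance by its rule table, which is the sense of "mechanistic" used in this post.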

Intelligence as generator of what is incomputable

IDT shows that CSI cannot be generated by chance and necessity (randomness and laws). An algorithm (which is a generalization of law) can output only what is computable, and CSI is not. The concept of intelligence as "generator of CSI" can be generalized to "generator of what is incomputable". Needless to say, intelligence can also generate what is computable (what can do more can do less). Intelligence can work as a machine, but a machine cannot work as intelligence: between the two there is a non-invertible relation. This is why intelligence designs machines and the inverse is impossible. Considering intelligence as "generator of what is incomputable" makes sense because we know that intelligence is able, for instance, to develop mathematics. Metamathematics (Gödel's theorems) states that mathematics is in general incomputable: it establishes limits to mechanistic deducibility, but it does not establish limits to the intelligence and creativity of mathematicians.

Now it is straightforward to see that a generator of what is incomputable is itself incomputable. Suppose it were computable, i.e. could be generated by a TM. If this TM can generate it, and it in turn can generate what is incomputable, then, given that an output of an output is an output, this TM could compute what is incomputable, a contradiction. Since we reach a contradiction the premise is false; hence intelligence is not computable.

The Infinite Information Source (IIS)

Now let's pass to another, more demanding but deeper perspective on our topic: intelligence as the interface or link between any intelligent being and what we could call the "Infinite Information Source". The IIS is an aspect of the Metaphysical Infinity (or Total Possibility) that contains all, and therefore contains all information too. Outside the IIS there is no information, because there is nothing at all. The existence of the IIS is a logical inference. It is common evidence that intelligent beings (humans) routinely produce new information. This production is not creation from nothingness, because from nothingness nothing comes; this information must therefore come from a source higher than the intelligent beings themselves. In a sense, there is never new information. Besides, we know from repeated experience that intelligent beings share common information (in two senses: as information they contain within themselves and as information they know). This shows that intelligent beings share the same higher source of information.

It remains to show that this higher source (call it S) is the IIS. The demonstration is per absurdum:

(1) Let's hypothesize that S is finite. By "finite" I mean non-Infinite (i.e. "not containing all information"). As such, S is different from the IIS.
(2) Since S is finite, consider its complement set ~S containing all information not belonging to S. Obviously ~S is included in the IIS.
(3) S and ~S are disjoint sets by definition.
(4) Now consider an information 'a' of S and an information 'b' of ~S.
(5) If a and b are information, then c = (a AND b) is information too.
(6) The question is: does c belong to S or to ~S? It cannot belong to both, because they are disjoint sets.
(7) Let's hypothesize that c belongs to S. Then S contains 'b', contrary to #4. So this hypothesis is untrue.
(8) From #7, c must belong to ~S. Then ~S contains 'a', contrary to #4. This hypothesis is also untrue.
(9) Since we have obtained a contradiction, premise #1 is false. S is the IIS.

At this point we have three basic elements in the scenario: the IIS, the being, and what connects them (the channel through which information passes from the former to the latter, like a stream from a source to a sink). A classic symbolism that can help in understanding their relation is the Sun creating an image on the surface of water. The Sun is the IIS, the image is the intelligent living being, and the beam connecting the Sun to its image is the channel (the over-individual intellect). As the Sun is the cause of its image on the water (which would not exist without it), the IIS is the cause of the intelligent living being. In particular, the intersection of the beam with the plane of our layer of existence causes, at the center of the human state, the arising of the human soul or psyche (with all its faculties: mind, reason, consciousness, thought, free will, emotions, sentiments ...). The intersections of the beam with the centers of other layers of existence give different faculties of knowledge to other, non-human beings. The vertical hierarchical stack of all parallel planes symbolically represents the multiple states of being. The physical body is only the last by-product, the final unproductive production in the causality chain from the IIS to matter. Warning: here the Sun is only a symbol for the source of knowledge (traditionally light was always a symbol of knowledge); obviously intelligence doesn't really come from the physical Sun, and the soul is not a reverberation upon physical water. I say this because in a previous discussion about thermodynamics I defended the obvious position that the Sun does not send us information, but only energy.

The IIS is eminently incomputable because it cannot even be derived from any system (and from all that comes from the development of a system's potentialities). In fact any system F leaves outside all that is "non-F". The IIS leaves nothing outside; hence the IIS is in principle absolutely unreachable by any systematization. Continuing the Sun symbolism: as the beam's light is not really different from the source's light, so intelligence too participates in the incomputability of the IIS.

The above proof also evidences another only-seemingly odd thing: the IIS is not properly composed of parts, because when we hypothetically divide it into parts we obtain contradictions. It is our analytic reason that divides the IIS into parts, which do not really exist distinctly in the IIS, because it is eminently synthetic. The IIS is essentially indivisible; this necessarily excludes any composition and entails the absolute impossibility of conceiving it as composed of parts. The IIS is an aspect of the Absolute, and the Absolute can have no relations whatsoever with the relative. Since the IIS really has no parts, the link and the linked being too are only apparently its "parts" and in the end are the IIS itself. As such they directly participate in the incomputability of the IIS. Again we have reached the same deduction.

The same conclusion is reached from yet another point of view. Suppose we found a finite process outputting intelligence. Nobody can a priori exclude that, through its link to the IIS, intelligence receives some data that the finite process is unable to output. One can express this by saying that intelligence is "open" to Infinity, while, to be computable, a thing must be "closed". This "opening" makes intelligence virtually infinite. It is another way of stating the fundamental principle of "universal intelligibility": there is nothing really unknowable; all things are in principle knowable. Of course there may be countless things actually unknown to an intelligent being, but this is only a de facto temporary situation, not an in principio definitive destiny. Thus we see that, as I noted above, intelligence is something far more powerful and higher than a simple tool for solving practical problems, because it can virtually know all. Since intelligence is virtually the knower of all that is incomputable, it in turn cannot be computable, because the knower cannot be lower in rank than the known.

Given that we are dealing with universal intelligibility, it is necessary to clear up a possible misunderstanding. To avoid it we must carefully distinguish reason and intellect. This distinction, well clear to most ancient philosophers, was lost with the rise of rationalism and humanism in the modern era. As someone said: "it was reason that betrayed intellect". The first product of rationalism in the scientific field was Cartesian mechanicism, which is related to the computability I deny here when applied to intelligence and life. Reason is merely an individual human faculty. It is a discursive, indirect form of analytical knowledge that takes logic and argumentative tools as its support. Reason cannot be universalized as is. Quite differently, intellect is a higher universal faculty of direct synthetic knowledge pertaining to all states of being. This explains why, with the rise of rationalism and humanism, the knowledge of universal principles (such as the Metaphysical Infinity) was lost: what is universal can be known only by a universal faculty. Reason is only the lower individual part of intelligence (the horizontal image), while intellect is its higher over-individual part (the vertical beam). Intellect is over-rational. Warning: over-rational is not at all irrational, as some believe! Universal intelligibility makes sense only when addressed by intellect. If we remain on the plane of human reason, there is no universal intelligibility. In other words, it is not reason that is omniscient, and there is no such thing as universal reasonability.

The key point is that all the above definitions of intelligence agree with and support each other. They are consistent because they represent different viewpoints of the same reality. Hence the respective demonstrations of incomputability also show the same impossibility seen from different perspectives. The above argument has corollaries. The incomputability of intelligence and its non-mechanistic nature debunk once and for all any illusion that so-called Artificial Intelligence can create real intelligence. The IIS can be considered an aspect (expressed in terms of information) of the Universal Intelligence or Divine Intellect, and since it is also the Source of the universe, which is a design, the symbolism of the Great Designer can be applied to it.

Life as carrier of intelligence

Now let's consider life (specifically the life of conscious living beings) and give it the following definition: the physical carrier or support of intelligence, what allows intelligence to manifest and operate on the physical plane. If the carrier (living soul and body) were only mechanistic, it could not adequately express intelligence, which is not mechanistic. It is a claim of IDT that physical signs manifest the non-physical nature of intelligence. These signs (CSI, IC, etc.) are non-mechanistic, and what displays such signs cannot be mechanistic either. Living soul and body display such signs, and so we can conclude that life is non-mechanistic.

To illustrate the concept with an example, consider a clear manifestation of intelligence in a living being: language. Even Noam Chomsky admits that language is structural and hardwired in its physical carrier, the brain. Language is not mechanistic: the high expressions of literature cannot be created by a machine. The classic materialist objection to this claim is: machines too can output literary works. Machines can output texts (strings of characters), but their outputs entirely lack meaning, and indeed this proves that they are not true manifestations of intelligence (which is the only source of meanings). For instance, when a writer writes the four-character word "love" he has in mind all the meanings of the idea, whereas when a machine writes "love" it has nothing in mind, for the simple fact that it has no mind. And here what stands in the "background" (the semantics) is more important and essential than what stands in the "foreground" (the syntax), so to speak. Moreover, if a machine writes "love" it is because it was programmed to do so, not because it wanted to (as a human writer does). Just a curiosity: an ancient Hebrew legend speaks of the Golem, a sort of automaton that Cabalists were said to be able to vitalize by means of esoteric rituals. The Golem was able to simulate a living being (a robot ante litteram) but was unable to speak, because language is an advanced ability that only truly intelligent living beings have.

Of course all that I have written here is light-years from Darwin's materialist and reductionist idea of "thought, being a secretion of the brain". Modern evolutionists believe themselves more sophisticated in saying that "thought is an emergent property of the brain", but on examination their claim is no more explanatory. In fact emergent properties involving information (and mind eminently implies information) don't spontaneously "emerge" from the bottom like a secretion (as they think) but come from the top, from an intelligent source. About this topic see my previous post on emergence.

To sum up: about intelligence (as about many other things) we face two diametrically opposite worldviews, the non-materialist ID view and the materialist one (with all its consequences, evolutionism included). The former is top-down; the latter is bottom-up. Non-materialism states that matter itself comes from information. Materialism, denying any principle higher than matter, ultimately believes that information arose from matter. These two opposite worldviews cannot both be true. I hope these brief notes may help some readers to know which of them is on the side of truth.

Comments
In biology CSI refers to biological function. IOW not any DNA sequence will do.

Joseph, December 3, 2009, 04:31 AM PDT
JT #63-64
If S is some set of axiomatic statements plus all statements logically derivable from those statements, then ~S is just every other possible statement. This means that if S contains ‘a’ but does not contain ‘b’, then ~S will contain b as well as ‘a AND b’, but does not contain ‘a’, because ~S is not required to contain all statements logically derivable from those it contains. ~S by definition is just everything that S does not contain. Thus your premise 7 is wrong, and your proof as well. Excuse me, it's your premise 8 that is wrong.

(A) "S is some set of axiomatic statements plus all statements logically derivable" - this is not my definition of S. (B) "If S contains ‘a’ but does not contain ‘b’, then ~S will contain b as well as ‘a AND b’" - whether ~S contains c or not is exactly what is at issue in my step #6, so you are asserting something not yet established. Given A and B, your "your premise 8 is wrong" is a non sequitur, and my proof stands.

niwrad, December 3, 2009, 01:55 AM PDT
JT, you've explained niwrad's fallacy perfectly. His reductio ad absurdum is invalid because it contains hidden premises, namely that S and ~S are each closed under logical derivation. Eliminate the premise that ~S is closed under logical derivation, and no absurdum follows from the remaining premises, as you show. But if the proof were valid, we could come up with all kinds of fun corollaries. If S=IIS for all finite sets S, then it would follow that IIS is simultaneously equal to every finite set, including the empty set. So it follows that information doesn't exist, and therefore this thread doesn't exist.

R0b, December 2, 2009, 04:45 PM PDT
Excuse me, it's your premise 8 that is wrong.

JT, December 2, 2009, 03:52 PM PDT
OK I think I have the answer now: if S is some set of axiomatic statements plus all statements logically derivable from those statements, then ~S is just every other possible statement. This means that if S contains 'a' but does not contain 'b', then ~S will contain b as well as 'a AND b', but does not contain 'a', because ~S is not required to contain all statements logically derivable from those it contains. ~S by definition is just everything that S does not contain. Thus your premise 7 is wrong, and your proof as well.

JT, December 2, 2009, 03:51 PM PDT
I'm trying to figure out if that proof was an unnecessary elaboration of a kind of vacuous point or not. Are you saying that any information expressed by an intelligent being has to be a member of the set of all information? Or rather that the immediate source for any information expressed by an intelligent being of necessity has to be the set of all information?

JT, December 2, 2009, 03:21 PM PDT
JT #59 By c = (a AND b) I mean the logical conjunction of predicates. If c is true in a set, then a and b are both true in that set. Your examples (bitwise AND, chemical compounds, colour fusion) fail to meet this criterion and consequently are not counter-examples to my proof.

niwrad, December 2, 2009, 02:05 PM PDT
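The distinction the two commenters are arguing over, between the truth of a conjunction of predicates and the membership of a conjunction in a set of statements, can be made concrete; a minimal Python sketch (the predicates and the example set are invented here purely for illustration):

```python
# niwrad's reading: c = (a AND b) as a conjunction of predicates.
# Wherever c holds, both conjuncts hold.
def a(x): return x % 2 == 0      # "x is even"
def b(x): return x > 10          # "x exceeds 10"
def c(x): return a(x) and b(x)   # the conjunction

assert c(12) and a(12) and b(12)  # where c is true, a and b are true

# JT's reading: sets of statements. A set may contain the single
# statement "a AND b" without containing "a" or "b" as members,
# unless it is assumed to be closed under logical derivation.
S = {"a AND b"}
assert "a" not in S and "b" not in S
```

The two readings are not equivalent, which is why each side takes the other's premise to be the one at fault.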
niwrad:
CSI is a specific measure of complexity of a system (among many others). As any measure it is computable for definition (I agree on this). But the measure of a thing is not the thing itself!
But I'm not arguing for the calculability of CSI measures. I'm saying that the systems themselves, insofar as they can be represented formally, are computable.

R0b, December 2, 2009, 01:11 PM PDT
niwrad:
It remains to show that this higher source (say it S) is the IIS. The demonstration is for absurdum: (1) Let’s hypothesize that S is finite. With “finite” I mean non Infinite (i.e. “non containing all information”). As such S is different from IIS. (2) Since S is finite let’s consider its complement set ~S containing all information not belonging to S. Obviously ~S is included into IIS. (3) S and ~S are disjoint sets for definition. (4) Now consider an information ‘a’ of S and an information ‘b’ of ~S. (5) If a and b are information, also c = (a AND b) is information. (6) The question is: c belongs to S or ~S? It cannot belong to both because they are disjoint sets. (7) Let’s hypothesize c belongs to S. Then S contains ‘b’, contrary to #4. Then this hypothesis is untrue. (8) From #7 we have that c must belong to ~S. Then ~S contains ‘a’, contrary to #4. Also this hypothesis is untrue. (9) Since we have obtained a contradiction the premise #1 is false. S is IIS.
You seem to assume (in 7, 8) that if a set contains the member 'a AND b' then it must of necessity contain the two additional members a, b. Say it's bitwise AND: if a = 1100 and b = 1010, then (a AND b) = 1000. If S contains 1000, does that mean it has to contain 1100 as well? If you have some process that can detect the presence of salt, does the set of things it can detect necessarily include elemental sodium and chlorine? If you can detect the color green, does that mean you can detect the colors blue and yellow as well? If you know a set of facts through experience, does that mean you know all facts logically derivable from that set (say through any combination of AND, OR and NOT)?

JT, December 2, 2009, 12:44 PM PDT
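JT's bitwise-AND example can be checked directly; a small Python sketch (the set S below is my own illustrative construction, not from the thread):

```python
# JT's point: a set can contain (a AND b) without containing a or b.
# The values are the ones from the comment: a = 1100, b = 1010 in binary.
a = 0b1100
b = 0b1010
c = a & b          # bitwise AND
assert c == 0b1000  # (a AND b) = 1000, as stated

# A set containing only c is perfectly well-defined; membership of c
# does not force membership of a or b.
S = {c}
assert c in S and a not in S and b not in S
```

Under this reading of AND, steps 7 and 8 of the proof do not go through, which is exactly the objection being raised.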
R0b #38
The fact is that CSI is necessarily computable, by definition.
Perhaps you fail to see the consistency of my arguments because of the following misunderstanding. When I say "intelligence and life are non-computable" I mean that their production, content and nature are not computable, i.e. not mechanistically generable. Analogously, when I say "CSI is non-computable" I mean: the system, whose CSI measure of complexity is x, has a production, content and nature that are not computable, i.e. not mechanistically generable. CSI is a specific measure of the complexity of a system (among many others). As any measure, it is computable by definition (I agree on this). But the measure of a thing is not the thing itself! A thing can be non-computable even though we can compute many measures of it. To measure means to reduce to quantity. Since intelligence and life are essentially qualitative, they cannot be reduced to quantity; their measures are necessarily defective in principle. This doesn't mean we cannot get approximate measures of them, but they will necessarily be ... computations of what is non-computable (so to speak). So while I have in mind the production, content and nature of systems (and argue that these are non-computable), you have in mind their CSI measures (and argue that these are computable by definition). If this is the situation we are both right; we are simply arguing from different points of view.

niwrad, December 2, 2009, 12:36 PM PDT
CJYman, I'll also reiterate one of Mustela's crucial questions: How is a given stretch of DNA an independent pattern? The question highlights the problem with claiming that T is specified by virtue of T being functional. The only specifying agents I know of are humans, which have a biased interest in functionality. Natural selection, by definition, is also biased toward functionality. So there's no guarantee that the probability of a given function being in our "side information" is conditionally independent of the probability of that function evolving. For example, can we honestly say that the evolution of propellers for bacteria is probabilistically independent of humans being familiar with the concept of a propeller? Of course not. The probabilities of the two events are biased by a common factor -- namely the benefits of motility.

R0b, December 2, 2009, 12:13 PM PDT
Graham #36 To explain in a few words what intelligence is, I necessarily had to resort to elements of the traditional doctrine (in particular the metaphysical theory of the multiple states of the being and some related symbolisms), topics that require entire books. The statement you quoted has meaning only in the framework of that doctrine, which I cannot even pretend to elaborate here. However, here are some additional ultra-simplified notes. According to the traditional theory of the multiple states of the being, universal existence is considered divided into states or levels (symbolically represented as horizontal planes). Each of them is defined by its own conditions, modalities and limits. The stack of all states is hierarchized according to the defining conditions of the states (fewer limits imply higher rank, because fewer limits mean more possibilities and power). One of the main distinctions among states is between individual (lower) states and over-individual (higher) states. Human beings occupy one of the individual states of existence. Since all states come from the First Cause or Source, one can consider in each plane a center representing where the Cause acts. As a consequence we can imagine all these centers as crossed by a unique vertical straight line starting from the Source. Symbolically this Source was often represented as the physical Sun and the straight line as a light beam generating an image at each center. Since the Cause operates and causes by means of intelligence, the vertical beam represents the Universal Intellect, which all states share and from which all their faculties of knowledge come. Anyway, to delve seriously into these topics I recommend reading at least the following books by René Guénon: "The Multiple States of the Being", "The Symbolism of the Cross" and "Man and His Becoming according to the Vedanta".

niwrad, December 2, 2009, 11:56 AM PDT
CJYman, thank you for your very responsive comments. I started to write a response, got pulled away, and came back to find my thunder stolen by Mustela and Zachriel. I'll throw my now-redundant comments into the mix anyway. In Dembski's flagellum example in his latest account of CSI, he says that H is "the relevant chance hypothesis that takes into account Darwinian and other material mechanisms". You instead chose a uniform distribution for H. To your credit, you state this assumption explicitly, and you explain your justification for it, which I applaud. But this raises a few issues. CSI is technically a property of events, not physical objects. When we talk about the CSI of an object, we're actually referring to the CSI of the origination of that object. What, then, constitutes the origination event of a stretch of DNA? Presumably it includes the evolution process over many generations, but does it also include the formation of the constituent atoms, the formation of the stars that formed those atoms, the advent of gravity that formed those stars, etc.? The choice is arbitrary, so it seems that the CSI of any given object is ill-defined. And I don't think that Marks and Dembski's work provides sufficient justification for assuming uniformity. You say:
The point is that according to Marks and Dembski’s recent work on active information, it is just as improbable to find something such as CSI as it is to generate a set of laws that would create an evolutionary algorithm to produce an incremental pathway to that CSI in question.
Actually they say that the latter is more improbable, which is what I think you meant to say. But you have to consider the assumptions on which their math is based, which have no connection to empirical science. In essence, they concoct an imaginary reality in which everything is ultimately uniformly random. It's obvious that biological organisms cannot emerge from such a state of affairs -- we don't need Marks and Dembski's math in order to understand that. As Mustela has noted, science has to be grounded in the reality that we observe around us, not in Marks and Dembski's promotion of the Principle of Indifference from a methodological principle to a metaphysical claim.

R0b, December 2, 2009, 11:29 AM PDT
Zachriel:
If CSI is to have any scientific value as a metric, it has to have a reasonably consistent result without regard to the background knowledge of the observer.
Scientific inferences rely heavily on the background knowledge of the observer.

Joseph, December 2, 2009, 11:25 AM PDT
CJYman: Which could easily apply to many things such as the age and size of the universe, or whether it even had a beginning or not.
Sure, we measure the height of a tree and it's off by a factor of 2^34,350 because we learn something new. If CSI is to have any scientific value as a metric, it has to have a reasonably consistent result without regard to the background knowledge of the observer.

Zachriel, December 2, 2009, 10:42 AM PDT
CJYman at 40, On a slightly separate topic you raised...

Mustela Nivalis: "the mechanisms identified by modern evolutionary theory do not create things like genomes de novo — they do so incrementally and therefore cannot be modeled so naively."

"The point is that according to Marks and Dembski's recent work on active information, it is just as improbable to find something such as CSI as it is to generate a set of laws that would create an evolutionary algorithm to produce an incremental pathway to that CSI in question."

But we don't need to find a set of laws. Physics, and therefore chemistry, is a given. Modern evolutionary theory attempts to explain what we observe, with the physical laws that exist. You might be able to construct an argument for cosmological ID based on "search for a search", but if you want to apply CSI to real biological systems, you have to take known evolutionary mechanisms into account.

Mustela Nivalis, December 2, 2009, 10:37 AM PDT
CJYman at 40, thank you for the detailed reply. I'm going to have to start with just a couple of your steps where I have some confusion.

Mustela Nivalis: "Given its importance in discussions of ID, surely someone here must have an example of calculating CSI, as described in No Free Lunch, for an actual biological object of some sort?"

CJYman: "First, according to Dr. Dembski, a specified event is an event which can be formulated as an independent pattern."

I've read that, but it seems very difficult to apply. What does it mean for a pattern to be independent? What is an example of a dependent pattern? Can't anything be described by a pattern separate from itself?

CJYman: "Thus, a specified pattern/event relationship can take the form of a function where f(pattern)=event."

Are you using this mathematical notation formally or informally? If formally, could you explain the nature of the function f?

CJYman: "So, if we have an event such as a folding and biologically useful protein,"

A protein is an event? I apologize if this is a simple question, but I have read a fair bit of ID material and I find this terminology confusing. Can you rephrase it in more standard biological terms?

CJYman: "and an independent pattern such as a stretch of DNA,"

Again, I apologize if I appear pedantic, but why would we consider the DNA that encodes for a protein to be an "independent pattern"? What is the precise definition you're using?

CJYman: "then we can calculate for specificity, and then calculate the probability of the event given a uniform probability distribution."

Why a uniform probability distribution? We know from observation that many mechanisms identified by modern evolutionary theory are not random in their behavior (mutation is assumed to be, but selection, for example, is not). The uniform probability distribution assumption is equivalent to the assumption that the DNA arose in its present form de novo. That corresponds neither to modern evolutionary theory nor to observed evolutionary mechanisms.

I'll stop here to give you a chance to clarify my understanding. I do appreciate your willingness to discuss this.

Mustela Nivalis
December 2, 2009
10:37 AM PDT
And one more thing, Zachriel: it would make things worse for the critic if it actually was the ACTG and not just the hydrophobic/hydrophilic nature of amino acids that affects folding. Furthermore, in actuality, the probabilistic resources are vastly worse off than I allowed for, since not every atom in the universe has been constantly searching through protein space for 15 billion years. So, yes, calculations can be updated and revised in either direction at any time. But again, that happens all the time in science (i.e., the age and size of the universe, which I am constantly referring to), and that is what separates science from dogma.

CJYman
December 2, 2009
10:36 AM PDT
Zachriel: "I've already shown where changing your state of knowledge changes the result by a factor of 2^34,350. Nothing about the object of study has changed, only that you have possibly learned something new."

Which could easily apply to many things, such as the age and size of the universe, or whether it even had a beginning or not. So go ahead and re-calculate and see if you arrive at a non-CSI value. This is great, isn't it, Zachriel ... we get to do ID related research together. So how has your re-calculation helped to make the value more accurate?

CJYman
December 2, 2009
10:22 AM PDT
Zachriel: "Most proteins are much shorter. And once you have short, functional proteins, longer ones can evolve."

For the second time in one "conversation" you refuse to read what I write. I'm beginning to think that you haven't changed at all. It's all about the obfuscation, isn't it, Zachriel? I already stated that: "The point is that according to Marks and Dembski's recent work on active information, it is just as improbable to find something such as CSI as it is to generate a set of laws that would create an evolutionary algorithm to produce an incremental pathway to that CSI in question. IOW, it is just as improbable that a human brain would randomly generate itself from a pool of its constituent material, as it is that an evolutionary pathway for the human brain would result from a 'by chance' fine tuning of laws and initial conditions."

P.S. I hate having to repeat myself -- it wastes my time -- which is why I haven't engaged in conversation with you in a while, Zachriel.

CJYman
December 2, 2009
10:18 AM PDT
CJYman: "Any measurement and indeed all of science and the results we obtain are influenced by our ignorance."

But it's not based on human ignorance. I've already shown where changing your state of knowledge changes the result by a factor of 2^34,350. Nothing about the object of study has changed; only that you have possibly learned something new.

Zachriel
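Zachriel's 2^34,350 factor can be made concrete with a short script. This is my own editorial sketch (no code appears in the original thread), reusing the figures from CJYman's Titin calculation elsewhere on this page: dropping the 1/2^34,350 homochirality term shifts the bit count by exactly 34,350 bits, i.e. a factor of 2^34,350 in probability.

```python
import math

LOG2_10 = math.log2(10)

def csi_bits(chirality_bits: float) -> float:
    """CSI = -log2[resources * specified events * P(T|H)], worked in log2 space.

    Figures from CJYman's Titin example: 10^150 probabilistic resources,
    2.4 x 10^10,330 specified events of the same probability, and
    P(T|H) = 1/2^(34,350 + 34,350 + chirality_bits) covering peptide
    bonds, sequence, and (optionally) homochirality.
    """
    log2_resources = 150 * LOG2_10
    log2_events = math.log2(2.4) + 10_330 * LOG2_10
    log2_p = -(2 * 34_350 + chirality_bits)
    return -(log2_resources + log2_events + log2_p)

before = csi_bits(34_350)  # homochirality unexplained: term is 1/2^34,350
after = csi_bits(0)        # a physical explanation is found: term is 1/1
print(before - after)      # 34,350 bits, i.e. a factor of 2^34,350
```

On these figures, even with the chirality term removed the sketch still yields roughly 33,885 bits, which is the quantity at issue in CJYman's reply about re-calculating.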
December 2, 2009
10:18 AM PDT
... Oh, look at that, Zachriel: you get the opportunity to engage in ID research.

CJYman
December 2, 2009
10:11 AM PDT
Zachriel: "It's not flawed because it is based on the current state of human knowledge, it's flawed because it is directly dependent on human ignorance. The more ignorant you are, the higher the CSI determined."

Any measurement, and indeed all of science and the results we obtain, are influenced by our ignorance. I have already provided an answer; as usual, you refuse to read through the whole of what someone states: "Of course some critics have stated that this method is flawed since it is based on our present ever changing knowledge of different quantities being measured in the formula. But, of course that is what makes CSI a part of science since it is able to be updated when more information comes in. It merely presents us with a measurement based on the present state of the evidence/data. Furthermore, that 'problem' is also the problem of any measurement. Think about measuring the age and size of the universe and how it is open for updating to provide greater accuracy or even complete revision upon new evidence/data."

So, go ahead and re-calculate. How far up the hierarchy of complexity that I discussed in #41 and #43 can you take the measurement without finding CSI? Yes, that is a challenge.

CJYman
December 2, 2009
10:10 AM PDT
CJYman: "At 1 in 10^10, the number of functional 34,350 aa long proteins would be 2.4 x 10^10,330."

Most proteins are much shorter. And once you have short, functional proteins, longer ones can evolve.

Zachriel
December 2, 2009
10:10 AM PDT
... and that's not even considering conscious systems. If probabilistic resources of 10^150 are not enough to generate one protein by a chance assemblage of laws, then riddle me this ... how do you expect to get to the seeming complexities required for intelligence, and then consciousness, by a chance assemblage of laws with probabilistic resources of only 10^150? ... Oh, and one thing I didn't mention is that utilizing all 10^150 probabilistic resources assumes that there are that many attempts to produce the event in question. IOW, I assumed that every point in our universe has been working on attempting to generate the protein Titin for 15 billion years. And remember, there are "only" 10^80 atoms in our observable universe.

CJYman
December 2, 2009
10:04 AM PDT
CJYman: "Probability of attaining all one-handedness = 1/2^34,350"

Except that if we find a simple physical explanation for homochirality, then it suddenly becomes 1/1. Then you're only off by a factor of a gazillion or two. Bailey et al., "Circular Polarization in Star-Formation Regions: Implications for Biomolecular Homochirality," Science 1998. Breslow and Cheng, "On the origin of terrestrial homochirality for nucleosides and amino acids," PNAS 2009.

CJYman: "Of course some critics have stated that this method is flawed since it is based on our present ever changing knowledge of different quantities being measured in the formula."

It's not flawed because it is based on the current state of human knowledge; it's flawed because it is directly dependent on human ignorance. The more ignorant you are, the higher the CSI determined.

Zachriel
December 2, 2009
09:58 AM PDT
... and that's only the CSI of one protein (mind you, the largest biologically relevant one, but still one protein). Now, the next question is ... "would the CSI skyrocket once we start considering multiple protein interactions, systems of multiple proteins, organs, organisms, and ultimately intelligent organisms?"

CJYman
December 2, 2009
09:56 AM PDT
Mustela Nivalis: "Given its importance in discussions of ID, surely someone here must have an example of calculating CSI, as described in No Free Lunch, for an actual biological object of some sort?"

First, according to Dr. Dembski, a specified event is an event which can be formulated as an independent pattern. Thus, a specified pattern/event relationship can take the form of a function where f(pattern)=event. So, if we have an event such as a folding and biologically useful protein, and an independent pattern such as a stretch of DNA, then we can calculate for specificity, and then calculate the probability of the event given a uniform probability distribution. Then we can multiply the two together and compare the probability of arriving at any folding and biologically useful specified protein of the same length to the probabilistic resources available. Probabilistic resources are measured as the max number of bit operations available within the observable universe at the time of the generation of that event. These probabilistic resources are relevant because only bit operations within the event's light cone can potentially have an effect on that specified event in question -- if the states of those bit operations can indeed not travel faster than the speed of light.

So, let's look at Titin (34,350 amino acids [aa]). I am going to give as much benefit of the doubt to the critics as possible when calculating probabilities.

- Assume a prebiotic soup, rich in all amino acids.
- Probabilistic resources have been calculated by Dembski as 10^150, and I do believe Seth Lloyd has arrived at a similar figure.
- Probability of forming all peptide bonds = 1/2^34,350
- Probability of attaining all one-handedness = 1/2^34,350
- Assume only the hydrophilic or hydrophobic nature of the amino acids is important for protein function (as opposed to the actual ACTG state) in order to make things even easier for chance; then the probability of aligning all amino acids in the correct configuration = 1/2^34,350
- Assume an extremely high ratio of functional to non-functional polymers of 1 in 10^10 (taken from a critic of ID citing the ratio of functional space when dealing with 100-amino-acid polymers, who also admitted that according to the method used there could be a greater number of non-biologically relevant polymers in the space; Doug Axe has estimated the ratio to be between 1 in 10^60 and 1 in 10^80, from what I can remember). At 1 in 10^10, the number of functional 34,350 aa long proteins would be 2.4 x 10^10,330.

CSI = -log2[M*N*s(T)*P(T|H)] > 1
CSI = -log2[probabilistic resources * number of specified events of the same probability utilizing the same states * probability of arriving at the specified event given a uniform probability distribution (chance hypothesis)] > 1

Skipping over a bit of the basic math:
CSI = -log2[10^150 * 2.4 x 10^10,330 * 1/2^103,050] > 1

Skipping more basic math:
CSI = -log2[1.7 x 10^-20,541] > 1

Conclusion: because we end up taking the -log2 of a very small fraction, we know that the answer will be a large positive number; we have a very large amount of CSI.

Of course some critics have stated that this method is flawed since it is based on our present, ever-changing knowledge of the different quantities being measured in the formula. But of course that is what makes CSI a part of science, since it is able to be updated when more information comes in. It merely presents us with a measurement based on the present state of the evidence/data. Furthermore, that "problem" is also the problem of any measurement.
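The log arithmetic in the comment above can be double-checked with a short script. This is my own editorial sketch (none of this code is from the thread); it treats "2.4 e10,330" as 2.4 x 10^10,330 and works entirely in log2 space, since the raw numbers overflow floating point:

```python
import math

LOG2_10 = math.log2(10)

# Probabilistic resources: 10^150 bit operations
log2_resources = 150 * LOG2_10
# Specified events of the same probability, s(T): 2.4 x 10^10,330
log2_events = math.log2(2.4) + 10_330 * LOG2_10
# P(T|H): (1/2^34,350)^3 for bonds, chirality, and sequence = 1/2^103,050
log2_p = -3 * 34_350

csi = -(log2_resources + log2_events + log2_p)
print(round(csi))  # about 68,235 bits; consistent with -log2(1.7 x 10^-20,541)
```

The product inside the brackets does come out near 1.7 x 10^-20,541, matching the comment's figure, so on its own stated assumptions the arithmetic checks out.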
Think about measuring the age and size of the universe and how it is open for updating to provide greater accuracy, or even complete revision, upon new evidence/data.

Mustela Nivalis: "the mechanisms identified by modern evolutionary theory do not create things like genomes de novo -- they do so incrementally and therefore cannot be modeled so naively."

The point is that according to Marks and Dembski's recent work on active information, it is just as improbable to find something such as CSI as it is to generate a set of laws that would create an evolutionary algorithm to produce an incremental pathway to that CSI in question. IOW, it is just as improbable that a human brain would randomly generate itself from a pool of its constituent material as it is that an evolutionary pathway for the human brain would result from a "by chance" fine-tuning of laws and initial conditions.

Furthermore, if you wish to throw multiple universes at the problem, then you'll first have to come up with some non-arbitrary rules for when multiple universes can be invoked to solve a problem. For if there is no non-arbitrary rule for such "chance of the gaps" problem solving, then multiple universes could be invoked to explain away anything and everything, including lawlike behaviour, and science comes to an end. IOW, what would be a *better explanation* for the instructions sent in the movie "Contact": an infinite chance-based search through the radio-wave probability space, or some type of previous intelligence (a system utilizing foresight to generate a target in the future that does not yet exist and then manipulate matter and energy in the present to achieve that goal)?

CJYman
December 2, 2009
09:38 AM PDT
niwrad at 33, I think what you have in mind has a lot to do with the work of Dembski and Marks at the Evolutionary Informatics Lab: http://www.evoinfo.org/ Unfortunately, there is no rigorous definition of CSI implemented in software there that I've found.

I wrote: "The papers you reference do not use the definition from No Free Lunch. In fact, their measurements are little more than computing 2 to the power of the length of the genome under consideration. That's nothing like how CSI is described."

niwrad: "Sorry but I don't understand why you say that 2 to the power of the length n of the genomic string is nothing like CSI. It represents the complexity of the sequence (in fact the value p = 1/2^n is the probability of its occurrence). The specification is represented by its functionality. The information in bits is I = log(1/p)."

There are two problems with using 2 to the power of the length of the genome in this case. The first is that it doesn't correspond to the description of CSI in No Free Lunch, which requires an explicit specification. The second is that the mechanisms identified by modern evolutionary theory do not create things like genomes de novo -- they do so incrementally and therefore cannot be modeled so naively.

Given its importance in discussions of ID, surely someone here must have an example of calculating CSI, as described in No Free Lunch, for an actual biological object of some sort?

Mustela Nivalis
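niwrad's identity I = log(1/p) with p = 1/2^n (taking the log base 2) can be checked numerically for a modest n. This is my own illustration, not code from the thread:

```python
import math

n = 100               # length of the string
p = 2.0 ** -n         # niwrad's p = 1/2^n
I = math.log2(1 / p)  # information in bits
assert round(I) == n  # I = log2(1/p) reduces to the raw length n
```

For genome-scale n the probability p underflows floating point, so in practice one works directly with I = n in log space; note that the result is nothing more than the string's length, which is the substance of Mustela Nivalis's first objection.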
December 2, 2009
08:14 AM PDT
niwrad, a few more points: My original question was whether other ID proponents claim that CSI is non-computable. If it were, why has Dembski not mentioned this significant point? The fact is that CSI is necessarily computable, by definition: non-computability entails infinite descriptive complexity, which, according to Dembski's definition, means zero specified complexity.

R0b
December 1, 2009
02:30 PM PDT