Uncommon Descent Serving The Intelligent Design Community

The cause of incompleteness


In a previous post I promised to start a discussion at UD about the incompleteness of physics from an ID point of view. Here is the opening article.

In Aristotle’s time, “physics” was the study of nature as a whole. In modern times physics has a more specific meaning and is only one field among the many that study nature. Nevertheless, physicists (especially materialist ones) claim that physics can (or should) explain all of reality. This claim rests on the gratuitous assumption that all macroscopic realities can be deduced entirely from their microscopic elements or states. Even if this assumption were true, there would remain the problem of understanding where those fundamental objects or states came from in the first place. Many physicists even envision a “Theory of Everything” (ToE), able to explain all of nature, from its lower aspects to its higher ones. If a ToE really existed, a system of equations would be able to model every object and calculate every event in the cosmos, from atomic particles to intelligence. The question many ask is: can a ToE exist in principle? If the answer is positive, we could consider the cosmos a giant system whose evolution is computable. If the answer is negative, there is a fundamental incompleteness in physics, and the cosmos cannot be considered a computable system. An additional question is: what are the relations between this problem and ID?

Stephen Hawking, in his lecture “Gödel and the end of physics”, seems to think that Kurt Gödel’s incompleteness theorems in metamathematics give reason to doubt the existence of a ToE:

“Some people will be very disappointed if there is not an ultimate theory, that can be formulated as a finite number of principles. I used to belong to that camp, but I have changed my mind. I’m now glad that our search for understanding will never come to an end, and that we will always have the challenge of new discovery. Without it, we would stagnate. Gödel’s theorem ensured there would always be a job for mathematicians. I think M-theory will do the same for physicists. I’m sure Dirac would have approved.”

In short, Gödel’s incompleteness theorems say that, in general, a mathematical formal system beyond a certain complexity is either inconsistent or incomplete. Hawking’s reasoning runs something like this: every physical theory is a mathematical model, and since, by Gödel’s incompleteness theorems, there are mathematical statements that cannot be proven, there must be physical statements that cannot be proven as well, including some contained in a ToE. Gödel’s incompleteness applies to all mathematical theories with potentiality greater than or equal to that of arithmetic. Since any mathematically formulated physical theory has potentiality at least that of arithmetic, it is necessarily incomplete. So we face a fundamental impossibility of a complete ToE, one that comes from results in metamathematics.

Computability theory and its continuation, Algorithmic Information Theory (AIT), are mathematical theories that can be considered a sort of meta-informatics, because they are able to prove statements about algorithms and their potentiality: what they can or cannot output. A basic concept of AIT is compressibility: an output that can be generated by a computer program whose binary size is much smaller than the output itself is called “compressible” or “reducible”. Given that a mathematical formal system and its theorems are comparable to an algorithm and its outputs, incompleteness in mathematics (unprovable theorems exist in a formal system) has its equivalent in incompressibility in AIT (there exist incompressible outputs, which no shorter algorithm can generate). For these reasons, by means of the tools of AIT it is possible to prove theorems equivalent to Gödel’s theorem. According to Gregory Chaitin (the founder of AIT):
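The compressibility notion above can be illustrated with a short sketch. This is only a rough illustration: a practical compressor like zlib is far weaker than AIT’s theoretical shortest-program measure, but the contrast between a patterned output and a random one still comes through:

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Size in bytes of the zlib-compressed form of `data` (level 9)."""
    return len(zlib.compress(data, 9))

regular = b"ab" * 50_000       # highly patterned: 100,000 bytes
random_ = os.urandom(100_000)  # incompressible with overwhelming probability

# The patterned string shrinks to a tiny fraction of its length
# ("compressible"); the random one stays at roughly full length.
print(compressed_size(regular))
print(compressed_size(random_))
```

The patterned string compresses to a few hundred bytes at most, while the random bytes refuse to shrink below roughly their own length, mirroring AIT’s compressible/incompressible distinction.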

“It is sometimes useful to think of physical systems as performing algorithms and of the entire universe as a single giant computer” (from “Metamathematics and the foundations of mathematics”). – “A theory may be viewed as a computer program for calculating observations. This provides motivation for defining the complexity of something to be the size of the simplest theory for it, in other words, the size of the smallest program for calculating it” (from “On the intelligibility of the universe and the notions of simplicity, complexity and irreducibility”).

A physical theory is composed of laws (i.e., algorithms). If the universe is a giant computer, then the incompressibility results of AIT apply: incompressible outputs exist, which no algorithm can create, so no complete physical theory describing those outputs exists. If the universe is not a giant computer, then a complete physical theory describing it does not exist by definition. In both cases we arrive at the incompleteness of physics. Chaitin’s conclusions are somewhat similar to Hawking’s:

“Does [the universe] have finite or infinite complexity? The conventional view on this held by high-energy physicists is that a ToE, a theory of everything, a finite set of laws of nature that we may someday know, which has only finite complexity. So that part is optimistic! But unfortunately in quantum mechanics there is randomness. God plays dice, and to know the results of all God’s coin tosses, infinitely many coin tosses, necessitates a theory of infinite complexity, which simply records the result of each toss!” (“From Philosophy to Program Size” 1.10)

About the infinite complexity, Chaitin is correct. But his language is a bit misleading where he says that “God plays dice”. In reality, even all apparently random results are willed by God. Otherwise His will would be limited by dice, and this is nonsense. Randomness too is under the governance of God. To deny this would be to deny His Omnipotence, and thus to deny the Total Possibility (which is another name for what theology calls God’s Omnipotence). From this point of view, any result that appears random to us is simply an event whose unique cause is God Himself directly (the First Cause), while a result due to a physical law is an event willed by the Law-Giver too, obviously, but by means of an intermediary law (which works as a secunda causa). So events divide into two sets: those willed by God that are not compressible into laws, and those willed by God that are compressible into laws. After all, there is no reason to believe that God should limit Himself to the latter only.

There is another point of view from which a physical ToE is incomplete. We might call this argument the “physics-incompleteness argument from ID”. If a ToE is to be what it claims to be, that is, a theory describing all aspects of reality, it must also deal with reality’s higher aspects, those related to intelligence. But intelligence is what creates theories. In fact, a ToE is an intelligent design, and the physicists who develop it are designers. A ToE is incapable of computing the decisions of its designer. Put another way, the free will of the designer of a ToE entirely transcends it. You can also look at the problem this way: if a physicist decides to modify the ToE, the ToE cannot account for it, because it is impossible for a thing to modify itself. As a consequence, since a ToE does not compute all things in the universe, it is incomplete and not a ToE at all.

To sum up: metamathematics proves the incompleteness of mathematics, and AIT proves the incompressibility of informatics. Both results reverberate on physics, causing its irreducibility. In turn, ID shows that a ToE is incomplete because it cannot compute its designer. These three fields agree in showing the incompleteness of physics and compose a consistent final scenario.

The important thing to grasp is that all incompleteness results in specific fields are only particular cases of a more general truth. To understand it we must start from the fundamental concept of the aforesaid Total Possibility, which has no limits because it leaves outside itself only the impossible and the absurd, which are pure nothingness. For this reason, the Total Possibility is not reducible to a system. In fact, any defined system S leaves outside everything that is “non-S”. This “non-S” limits the system S. Since S has limits, it cannot be the Total Possibility, which is unlimited. As Leibniz said: “a system is true for what it affirms and false for what it denies”. Large-enough subsets of the Total Possibility are likewise not reducible to systems. For Gödel, “large enough” means having potentiality greater than or equal to that of arithmetic. Mathematics and the cosmos are large enough in this sense, and as such are irreducible to systems. They are simply too rich to be compressed into a system, because they are aspects or functions of the Total Possibility. The Total Possibility has nothing to do with simple infinites (mathematical or of other kinds). Any particular infinite has its own limits, in the sense that it leaves something outside (e.g., the infinite series of numbers does not contain what was before the Big Bang, galaxies, elephants, your past and future thoughts, what will remain when the universe collapses… while the Total Possibility does). While there is only one Total Possibility, there are many infinites, which are infinitesimal compared to it. To confuse the two concepts, Total Possibility and infinites, is a serious error and causes a total misunderstanding of what the former is.

Systematization (the reduction or compression to a system) also represents, epistemologically, all the bottom-up approaches to total knowledge. The fundamental failure of systematization, when applied to rich-enough subsets of the Total Possibility, is also the failure of all bottom-up reductionist and positivist approaches to knowledge. Of course this failure appears negative only to those who harbor the naive illusion that more comes from less. For those who understand the Total Possibility, the in-principle failure of systematization is only a logical consequence of the fact that less always comes from more.

To use a term from computer jargon that everyone understands, mathematics and the cosmos are “windows” on the Total Possibility. Just as a window on our display is an aperture onto the operating system of our computer and allows us to know something of it, so mathematics and the cosmos are large-enough apertures onto the Total Possibility. This is sufficient to make them non-systematizable. This is true for the cosmos as well, despite the fact that it is infinitesimal with respect to the Total Possibility. It is easy to see that this “window” symbol is equivalent to the symbolism of Plato’s cave, from which the prisoners can see only the shadows of the realm of Ideas or Forms (Plato’s equivalent of the eternal possibilities contained in the Total Possibility). Plato, although he surely didn’t need scientific confirmation of his philosophy, would be glad to know that thousands of years after him, fundamental results in science support his correct top-down philosophical worldview.

Given its fundamental incompleteness, mathematics implies the necessity of the intelligence of mathematicians for its endless study. Since informatics is fundamentally incompressible, computers (and Artificial Intelligence in general) will never be able to fully substitute for human intelligence. Given its fundamental incompleteness, physics implies the necessity of the intelligence of physicists for its endless development. In turn, ID says exactly the same thing about complex specified information: its generation will always need intelligent designers. In a sense the ID concept of irreducible complexity also agrees with the above results: in all cases there is a “true whole” whose richness cannot be reduced, precisely because it represents a principle of indivisible unity. The final victory of ID over evolutionism will be only the unavoidable consequence of the fact that the former is a correct top-down conception while the latter is a bottom-up illusion. Bottom-up doesn’t work for the simple reason that reality is an infinite hierarchy of information layers, from the Total Possibility all the way down to the most infinitesimal part of the cosmos.

A believer asked God: “Lord, how can I approach You?” God answered: “By means of your humility and poverty”. Maybe in this teaching there is a message for us about our present topic (a message that Gödel, Hawking and Chaitin seem to have humbly acknowledged): precisely by recognizing the radical incompleteness (“humility and poverty”) of our systems, we have a chance to understand the “Infinite Completeness” of God.

Comments
I am interested in the effective calculability and solution of problems. You seem to be interested in sort of illusory and abstract calculability of them. As a consequence I fear we will never converge to an agreement.
You put forth an argument based on incompleteness and computability theory, neither of which deal with effective calculability and solution of problems. Regardless, this isn't a question of our respective interests, it's a question of whether your claims wrt computability are true or false. Computability is a well-defined mathematical concept, so it's not subject to opinion. At least one of us is simply wrong.
This doesn’t mean that our discussion has been unuseful and I thank you for your active participation.
I thank you too for your graciousness.
R0b
November 20, 2009, 09:00 AM PST
R0b #53 You seem to believe that any problem finitely defined on a finite number of objects can be resolved by means of a finite series of instructions or operations (a computation or algorithm). I really don’t understand what your belief is based on, considering the huge range that the concept of "problem" covers. This is even more unbelievable to me given that you have rightly stated that "virtually all problems are non-computable" (by the way, this agrees perfectly with the general thrust of my OP).
If you think that tomorrow’s winning lottery number is non-computable, you misunderstand what computability is all about.
How you can claim that tomorrow’s winning lottery number is obtainable by means of a series of instructions is beyond me. TMs cannot know the future. I am interested in the effective calculability and solution of problems. You seem to be interested in a sort of illusory and abstract calculability of them. As a consequence I fear we will never converge to an agreement. Anyway, these kinds of situations are typical when an evolutionist (you) and an IDer (me) discuss: the former inclines to oversimplify and reduce things, while the latter inclines to see things from an engineering, solution-oriented viewpoint. This doesn’t mean that our discussion has been useless, and I thank you for your active participation.
niwrad
November 20, 2009, 12:15 AM PST
By mean of your method of the hardwired values there is no incomputable problem.
That's incorrect. You can't hardwire answers to the Halting problem, because a TM is a finite object and the Halting problem has an infinite domain. And there are an uncountably infinite number of problems that are Turing-equivalent to the Halting problem. In contrast, the number of computable problems is countably infinite. Which means that virtually all problems are non-computable.
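R0b's counting claim rests on Cantor's diagonal argument. The sketch below is only a finite toy: the list stands in for a (really infinite) enumeration of 0/1 functions, and the diagonal function provably differs from every listed function at its own index, so it cannot appear anywhere in the enumeration:

```python
from typing import Callable, List

# A toy "enumeration" standing in for a listing of 0/1 functions.
# (Any real enumeration is infinite; a finite prefix suffices to show the trick.)
enumeration: List[Callable[[int], int]] = [
    lambda n: 0,                   # f_0: constant 0
    lambda n: 1,                   # f_1: constant 1
    lambda n: n % 2,               # f_2: parity
    lambda n: 1 if n < 10 else 0,  # f_3: threshold
]

def diagonal(n: int) -> int:
    """Cantor's diagonal function: flips f_n's answer at input n."""
    return 1 - enumeration[n](n)

# diagonal disagrees with every listed function at its own index,
# so it is absent from the list by construction.
for i, f in enumerate(enumeration):
    assert diagonal(i) != f(i)
```

Applied to a genuine enumeration of all computable 0/1 functions, the same construction yields a function that no TM in the list computes, which is the heart of the "uncountably many problems, countably many programs" point.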
Also the problem to know the future outcomes of the lottery become computable.
Absolutely. If you think that tomorrow's winning lottery number is non-computable, you misunderstand what computability is all about.
You seem to believe that all finite problems are computable. I provided a finite problem but you didn’t compute it.
But I explained why there is a TM that computes it.
R0b
November 19, 2009, 08:40 AM PST
R0b #51 By means of your method of the hardwired values, there is no incomputable problem. Even the problem of knowing the future outcomes of the lottery becomes computable: you provide us a TM with the hardwired answers. Your method is too good to be true. Computable functions are only a subset of functions/problems. You seem to believe that all finite problems are computable. I provided a finite problem, but you didn’t compute it.
niwrad
November 18, 2009, 11:34 PM PST
niwrad:
The problem is indeed to know if a TM can compute the answers from data different from them and without having them hardwired inside itself.
Actually, no. The definition of computability does not disqualify TMs with hardwired answers. You can search any computing theory text and you will find no definition of computability that matches your understanding of the term. Nor will you find any examples of non-computability that are finite-domain functions.
R0b
November 18, 2009, 10:18 AM PST
R0b #49
The TM determines the answer by looking it up in the table, which is incorporated in the TM. The question of how the table got populated with the correct answers — i.e. how the TM was made — is irrelevant to the computability issue.
Sorry, but I disagree. It is a tautology to say that if I insert the answers into a TM, then the TM outputs them. The real problem is to know whether a TM can compute the answers from data different from them, without having them hardwired inside itself. This is relevant to the computability issue.
niwrad
November 18, 2009, 07:50 AM PST
niwrad:
The problem is not there on that finite lookup table about which we agree, the problem is that any single row of it, i.e. the single attempt to know mechanically a single value of f (say y2=f(x2)) necessarily fails.
The TM determines the answer by looking it up in the table, which is incorporated in the TM. The question of how the table got populated with the correct answers -- i.e. how the TM was made -- is irrelevant to the computability issue. The only relevant question is whether some TM, out of the space of all possible TMs, implements the function.
Before a string on a computer screen we cannot a priori know by mean of a computation if it was written by a guy hitting on a keyboard or by a TM.
That's like saying that a TM can't say whether a given shirt came from Macy's or JCPenney. But of course a TM can do this. A TM can contain any information whatsoever, as long as it's finite. If the information contained in the history of the universe is finite, as Dembski argues, then a TM can "know" everything there is to know about the physical history of the universe.
R0b
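R0b's lookup-table point can be sketched in a few lines. The table entries below are hypothetical placeholders; the point is only that once the (finite) graph of a function is written down, evaluating the function is a trivial computation, regardless of how the table was populated:

```python
# A minimal sketch of the lookup-table argument: any well-defined function
# on a finite domain is computable, because its whole graph fits in a table.
# The answer values here are hypothetical placeholders.
TABLE = {
    "x1": 1,
    "x2": 0,
    "x3": 1,
}

def f(x: str) -> int:
    """Total function on the finite domain {x1, x2, x3}, codomain {0, 1}."""
    return TABLE[x]  # a table lookup is itself a (trivial) computation

assert f("x2") == 0
```

How the correct answers got into `TABLE` is a separate question from whether some TM implements `f`; the sketch only shows that one does.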
November 17, 2009, 10:02 AM PST
R0b #47
Okay, so the argument is not the string itself, but rather information about a certain physical instantiation of the string. No problem. If the function is well-defined and the domain is finite, f(x) can be implemented with a lookup table. This is not a controversial statement. If we can’t agree on that, then we mean different things by the term “computability” and we have no foundation for a discussion.
I don’t think we are really on a different page about computability. Perhaps the misunderstanding is the following. Given for f(x) a domain {x1, x2, ... xn} and the codomain {1, 0}, the lookup table has n rows: y1=f(x1); y2=f(x2); ... yn=f(xn). The problem is not with that finite lookup table, about which we agree; the problem is that any single row of it, i.e. the single attempt to know mechanically a single value of f (say y2=f(x2)), necessarily fails. It is this failure that makes me say that f(x) is incomputable, not the lookup table per se. Faced with a string on a computer screen, we cannot know a priori, by means of a computation, whether it was written by a guy hitting a keyboard or by a TM. It is this impossibility that my function describes.
niwrad
November 17, 2009, 01:34 AM PST
My function f(x) needs as argument a single specific string written somewhere.
Okay, so the argument is not the string itself, but rather information about a certain physical instantiation of the string. No problem. If the function is well-defined and the domain is finite, f(x) can be implemented with a lookup table. This is not a controversial statement. If we can't agree on that, then we mean different things by the term "computability" and we have no foundation for a discussion.
R0b
November 16, 2009, 03:49 PM PST
R0b #44
Does f(”Hello”) have a unique value?
My function f(x) needs as its argument a single specific string written somewhere. On the internet alone there are 382 million instances of "hello". Which of them do you specify? If you don’t specify the particular "hello", f(x) is not valid because x is not univocal. If x is univocal, f(x) has a unique value.
niwrad
November 16, 2009, 01:36 PM PST
Timothy V Reeves, I have no serious objections for now to what you wrote. However, the argument of fractals, equations, algorithms, etc. vs. information, which you put on the table, is so interesting that it is worth a specific UD article, which I am going to put in my agenda. At UD I always try to separate different arguments into different discussions, to be as focused and reader-friendly as possible. Please continue to frequent UD; I am sure we will have other nice discussions. It would be a pity if UD lost a commenter like you. Thank you.
niwrad
November 16, 2009, 01:23 PM PST
niwrad, does f("Hello") have a unique value?
R0b
November 16, 2009, 06:45 AM PST
Anyway, persevering with the concept of information, here is my reply to some of your points.

ONE) You say: "About your class of fully connected forms it seems to me a system already very complex." Yes, it is very complex, I’m not denying that. And I certainly agree that Irreducible Complexity, elevated to the level of some “catch all” principle, would cast grave doubts on the workability of conventional notions of evolution.

TWO) Clearly, random agitation doesn’t degrade the “physical constraints” imposed on the actual “material stuff” of a system, whether these constraints have been put in by hand or by equation; they reside on a kind of meta level above and beyond thermal agitation. If evolution is to work, then it is the information contained in these constraints that does the “heavy lifting” (to use an expression I have seen on UD), unaffected by thermodynamic agitation. The thermodynamic agitation has the effect of facilitating an exploration of the space of possibility, a space limited and narrowed by the “physical constraints”, constraints whose integrity remains untouched by decay. But, of course, it’s one thing to speculate about the possible mathematical existence of physical constraints that so narrow the space of possibility as to considerably enhance the chances of life evolving, and quite another to assert that the particular physics of our universe is one such system of constraints; if the physical constraints are too slack, the resulting unharnessed thermal agitation is simply a destructive force.

THREE) On the piece you said was a hard passage: whatever the details, I think we agree on the essential idea of intelligence (or at least some kind of a-priori complexity) ultimately being the source of information. The real issue is how that intelligence applies that information. In a nutshell, what I was trying to say is this: if we claim some complex event to have a high probability, we are in fact claiming that it has a high conditional probability; that is, P(Event|Condition) ~ 1. The high probability (with a concomitant loss of information) is presumably gained at the expense of a “condition” which then bears the low probability. But if it is claimed that this condition has a high probability, then this high probability in turn is gained at the expense of yet another low-probability condition, call it condition 2. That is, P(condition|condition 2) ~ 1, where condition 2 now bears the low probability. And so on to condition n. This, I think, is basically Dembski’s concept of the “conservation of information”, which he explores more rigorously in his papers.

FOUR) You say: "Anyway fractals are not examples of gratis creation of information." I agree, but my reason for agreeing is this: the information effectively resides in the fractal algorithm itself, because, being taken from a presumably large space of possibilities, it has a high improbability (assuming equal a-priori probabilities). As I have said, it is wrong to attribute unconditional probabilities of 1 to fractal calculations.

SIX) You say: "If with 'to distribute persistence probabilities' you mean 'to create CSI', then this creation cannot be obtained by equation." Yes and no. “Yes” because, as I have said, if it is possible to distribute life-enhancing persistence probabilities using succinct equations, then the improbability is found in the choice of equations, because they entail a rare (i.e. improbable) mapping of a “fast time” algorithm to a complex pattern, thus shifting the information to the equations selected; the equations would not create the information, but rather be the bearer of that information. “No” because, unlike yourself, my thinking has not yet reached a stage where I can confidently claim that there are no fast-time maps from some succinct systems of equations to the required distribution of persistence probabilities. Although in the far greater majority of cases complex structures are effectively incompressible strings, there is a small class of complex forms that do map to fast-time succinct (i.e. compressed) algorithms; as we know, a small number of complex and disordered sequences can be generated in fast time by a relatively small algorithm. Of course I’m not claiming that this is any strong reason to contradict your view that equations are not enough to implicitly embody life’s complexity (it merely sets a precedent), and therefore I look forward to your postings on the subject.
Timothy V Reeves
November 16, 2009, 04:58 AM PST
Hi Niwrad, thanks very much for a careful consideration of my points. Sorry to keep banging on about this, but I am simply using this as an opportunity to articulate my position. I have endeavored to use the term “information” in order to try to maintain compatibility with the ID community’s concepts, but I sometimes find it a rather slippery and awkward term. Part of the problem is that measures of “information” bundle the observer and the system observed into a joint system, and the observer himself becomes a variable in that system with the potential to be a depository of “hidden” information. For example, consider the case of an algorithm that generates a highly disordered sequence or a complex fractal. To the uninitiated the pattern is very information-rich, because each bit of the pattern has some surprise element and thus is able to inform. However, if the observer should learn the algorithm, the same pattern is no longer informative; the observer can predict each bit. What then has happened to the “information” in the pattern? The pattern hasn’t changed, so what has changed? My reading of the situation is that the change is in the observer: an improbable pattern in the form of an algorithm has now been implanted in the observer’s head. The same thing happens with a sequence of coin tosses: from the outset the sequence is information-rich, but as soon as the observer sees and learns the sequence, it is no longer able to inform and thus loses its information.

Some of the confusion seems to trace back to the use of the rubric “chance and necessity”. I much prefer my own rubric “law and disorder”, because so-called “necessity” is not necessarily necessary, and so-called “chance” may be more necessary than we think. Consider again the Mandelbrot set. For those initiated into the algorithm, each bit of the set has a probability of 1 and thus seems to classify as “necessity”. But this necessity is conditioned on the use of the Mandelbrot algorithm. Hence we have in fact P(bit|Mandelbrot) = 1 (and not P(bit) = 1), and therefore, because the algorithm has been chosen from who knows what huge space of possible algorithms, “necessity” should read as “conditional necessity”. If we use Dembski’s assumption of equal a-priori probabilities, then from the point of view of someone who doesn’t know what algorithm is being used, so-called “necessity” suddenly snaps over into highly informative improbability. This apparent appearance, disappearance and reappearance of information can make information a very frustrating concept to use.

Another little issue I have with information as a metric is that its use of the log operator results in a differentiated product of probabilities being lumped into a single undifferentiated sum, with the consequence that “information” is a metric which is not very sensitive to configurational form. Yet another issue is this: when one reaches the boundary of a system with “white spaces” beyond its borders, how does one evaluate the probability of the system’s options and therefore the system’s information? Is probability meaningful in this contextless system? I am inclined to follow Dembski’s approach here of using equal a-priori probabilities, but I am sure this approach is not beyond critique.
Timothy V Reeves
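Timothy's Mandelbrot example can be made concrete with the standard escape-time membership test. Given the algorithm, each membership "bit" is fully determined, which is his P(bit|Mandelbrot) = 1; the iteration cap makes this only an approximation of true membership:

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Escape-time test: c is (approximately) in the Mandelbrot set if
    z_{k+1} = z_k**2 + c stays bounded for max_iter iterations."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # once |z| exceeds 2 the orbit provably escapes
            return False
    return True

# For anyone who knows the algorithm, each "bit" of the set is fully
# determined, however disordered the boundary looks to the uninitiated.
print(in_mandelbrot(0j))      # 0 never escapes
print(in_mandelbrot(1 + 1j))  # escapes within a few iterations
```

Rerunning the test always gives the same answers, which is the sense in which the pattern carries no surprise, and hence no information, for an observer who holds the algorithm.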
November 16, 2009, 04:55 AM PST
R0b #40 If x in f(x) is your 280-byte answer in your comment #29, there are three possible cases: (1) you officially declare yourself to be the writer of x; then f(x)=0, because x is not written by a TM (rather by a guy named R0b). As I said, your declaration would not be a computation, but an entirely different thing. (2) x is written by a TM (or another mechanical system); then f(x)=1. To infer this we must be witnesses of this mechanical writing. Again, our testimony would not be a computation. (3) This is the "else" clause in the control flow: you don’t declare yourself to be the writer of x, and we are not witnesses of its TM generation. In this "else" case f(x) is incomputable, because it is impossible a priori to decide its Boolean value. Notice that in cases #1 and #2 as well, f(x) is not properly computed, because its values are found by means of actions other than computation. It is true, as you say, that the question "is any finite-length string computable?" has the answer "yes". In fact, in the worst case (if the string is incompressible, in the sense of algorithmic information theory), there at least exists the trivial program "a=...; print a;" able to output it. But the question that f(x) must answer is another one: "is the 280-byte answer in R0b’s comment #29 written by a TM or not?". Such an f(x) is incomputable.
niwrad
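The "worst case" program niwrad mentions can be sketched explicitly: for any finite string, a trivial program of roughly the string's own length outputs it. This is a minimal Python illustration standing in for the schematic "a=...; print a;":

```python
import contextlib
import io

def trivial_program(s: str) -> str:
    """Return Python source code whose sole effect is to print s."""
    return f"print({s!r})"

# Even an incompressible string is the output of SOME program: the
# generated source has size roughly len(s) plus constant overhead.
src = trivial_program("any finite string at all")

# Running the generated source reproduces the string exactly.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(src)
assert buf.getvalue() == "any finite string at all\n"
```

This is why "is this finite string computable?" is always answered "yes", while niwrad's question about *who or what actually produced* a given physical inscription is a different question altogether.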
November 15, 2009, 08:08 AM PST
niwrad:
It makes sense to ask if your answer x is computable or non-computable. That is exactly what f(x) does.
Okay, so f(x) tells us whether x is computable. Now I understand, hopefully. Assuming that x is a finite-length string, f(x) is always 1, because any finite-length string is computable. I'm going to stop here and see if we agree on this fundamental fact.
R0b
November 13, 2009, 12:48 PM PST
R0b #37
Okay, I misunderstood your description of f(x) in [30]. You’re saying that f(x) tells us whether x is the output of any TM, not just a given TM. In this case, f(x) is not a valid function, as there is not a unique value of f(x) for each value of x. TMs, sub-TMs, and super-TMs can all output finite sequences. Since f(x) isn’t a well-defined function, it makes no sense to say it’s computable or non-computable.
Why do you say that f(x) does not have a unique value for each value of x? It is Boolean: its value is y=1 OR y=0. To have multiple values would mean that a given y=g(x), for a certain x, has, say, both y=m AND y=n as values at the same time. Such a g(x) is said to be non-univocal. But this is not the case for my f(x). It makes sense to ask whether your answer x is computable or non-computable. That is exactly what f(x) does.
You said: According to Turing, computability means "to be generable by a TM". Then the question is: can a TM create a TM? My answer is "no" because a TM, with respect to its outputs, is a meta-concept of creation. You're explicitly talking about Turing computability, and saying that a TM cannot create a TM. But it can.
It can syntactically but not semantically.
And you have yet to establish that premise [life/intelligence is non computable]. If, by non-computable, you mean something other than the established definition, then perhaps a different word would be more appropriate. As for me proving that life/intelligence is computable, the burden is on you to prove non-computability, since the premise is yours, not mine. But if I were to take your challenge, you would need to tell me your formal representation of life/intelligence in order for the proposition to even make sense.
In #26 I wrote that I would write a specific article about intelligence, and I will. Unfortunately I have too much work in the pipeline. This discussion between us is only somewhat propaedeutic. Yes, the premise is mine, but the affirmation that "my application of computing theory isn't correct" is yours, and I still don't see valid arguments from you supporting your claim.

niwrad
November 13, 2009 at 12:46 AM PST
Timothy V Reeves, I appreciate your work to elaborate these concepts from an ID point of view. Here are my comments.
I'm not challenging IC here but I am simply making the opposite assumption of reducible complexity and seeing where it takes me. As far as you're concerned the following is a counterfactual argument. Reducible complexity (a requirement of any form of evolution including "Darwinism") demands at least the following conditions in morphospace: a) A class of forms across a wide spectrum of complexity that have a high probability of persisting. b) That this class is fully connected like a Mandelbrot set.
I agree that reducible complexity is a requirement of any form of evolution. In fact I am convinced that IC denies any evolution (the Darwinian one but also theistic evolution). As for your class of fully connected forms, it seems to me to be an already very complex system.
Given these conditions then random thermal agitation allows for a random walk of this connected set, with the network of persistence probabilities effectively acting as channels, depressions, wells and traps that have the effect of considerably enhancing the probability of this class of configurations by these configurations accumulating and damming up the “flow of probability”.
I don't know if I follow you here; in any case, random thermal agitation certainly doesn't increase the information content by a single bit. It is more likely that random thermal agitation increases entropy and destroys information.
Now, there are two ways of assigning these persistence probabilities. One way is to simply put them in by hand on an item by item basis. This contrived method of assigning persistence probabilities is what, I think, ID theorists would identify as an obvious and explicit form of frontloading. (Dembski identified this form of frontloading in the Avida experiment in computational evolution. See here; many thanks to Kairosfocus for alerting me to this paper.)
No objection here.
In doing this job by hand the configurations selected for high persistence probability swap a very low probability for a relatively high probability, thus effectively losing information. These enhanced probabilities presumably result from conditions further back in the source doing the selection, conditions that give rise to this highly improbable distribution of persistence probabilities. The source bears the low probability and thus the information appears to "come from" the source (presumably intelligence in this case) that assigns the persistence probabilities.
Sorry, for me this is a hard passage. However, what I like is information coming from an intelligent source.
There is however, to my mind, another conjectured way in which the source may assign persistence probabilities and thus bear the low probability. That source might distribute the persistence probabilities using a succinct mathematical function or functions and these functions constitute the “physics” of the system. Trouble is, it seems fairly clear that the persistence probabilities required to do the job would form a very complex pattern in morphospace. So the question is can such a pattern be defined by a set of relatively simple equations?
If with "to distribute persistence probabilities" you mean "to create CSI", then this creation cannot be obtained by equations.
Frankly I don't know the answer to that question. We do know, of course, that there is a relatively small class of large complex patterns that can be generated in relatively fast time from elegant mathematics/short algorithms, e.g. fractals and highly complex disordered sequences. These complex forms with a fast-time map to relatively simple mathematical functions are very rare, and so, applying Dembski's assumption of equal a-priori probabilities, they are thus able to bear the information burden.
In any case, fractals are not examples of information created gratis.
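Timothy's fractal example can be made concrete. The Mandelbrot set's intricate boundary comes from iterating the one-line map z → z² + c, so the pattern's algorithmic description is tiny however complex the picture looks. A minimal membership test (the iteration cap and escape radius 2 are the conventional choices):

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    # c belongs to the Mandelbrot set if the orbit of z -> z*z + c,
    # started at z = 0, never escapes the radius-2 disk.
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# The origin lies in the main cardioid; c = 1 escapes after a few steps.
assert in_mandelbrot(0j) and not in_mandelbrot(1 + 0j)
```

The whole generator is a few lines, which is exactly the sense in which a fractal's complexity is "cheap" in algorithmic-information terms.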
Hence at this stage in my thinking the following two questions are unsettled in my mind: a) Is it in principle possible to distribute persistence probabilities using succinct algorithms/mathematics? b) Is the physics of our world one of those succinct systems? As long as I remain unsure about the answer to these questions, evolution is a "design" candidate. I agree there is nothing we know that obliges the configurations of this world to have high probabilities, as it seems those configurations have been selected from an enormous space of possibility, thus implying some high information source. But I am in doubt about how that source bears the information burden.
To a) my answer is always the same: there is incompressible information. To b): it is precisely the goal of my current article to claim that "the physics of our world" is not a "succinct system", to use your own terms.
My comment about computational irreducibility is a pessimistic "what if" worst case scenario. If the computation from an elegant set of laws to a complex assignment of persistence probabilities is computationally irreducible then, as I have already suggested, analysis is going to be difficult and the burden will be on experiment to show the way. Trouble is, simply taking a few experimental samples isn't going to prove much either way. If this is the case then it looks to me that the argument will run and run.
I agree that a lot of research work has to be done on these topics.

niwrad
November 13, 2009 at 12:00 AM PST
niwrad:
My function f is Turing incomputable because it is a priori impossible to know syntactically whether your answer x (the argument of f) is outputted e.g. by a text pseudo-random generator or a spam engine or whatever mechanical system, rather than by a human mind.
Okay, I misunderstood your description of f(x) in [30]. You're saying that f(x) tells us whether x is the output of any TM, not just a given TM. In this case, f(x) is not a valid function, as there is not a unique value of f(x) for each value of x. TMs, sub-TMs, and super-TMs can all output finite sequences. Since f(x) isn't a well-defined function, it makes no sense to say it's computable or non-computable.
I didn’t claim that “TMs cannot output profound English text, other TMs, or any other finite output”.
You said: According to Turing, computability means "to be generable by a TM". Then the question is: can a TM create a TM? My answer is "no" because a TM, with respect to its outputs, is a meta-concept of creation. You're explicitly talking about Turing computability, and saying that a TM cannot create a TM. But it can.
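R0b's point is standard computability theory: a universal TM can output the description of any TM, and a quine even outputs its own description. A classic two-line Python quine, one construction among many:

```python
# The two lines below print themselves verbatim when run: the program's
# output is the program itself, i.e. a machine description produced as
# machine output.
src = 'src = %r\nprint(src %% src)'
print(src % src)
```

The trick is that `%r` substitutes the string's own quoted representation into itself, so the printed text reconstructs both lines exactly.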
My premise was that life/intelligence is non-computable.
And you have yet to establish that premise. If, by non-computable, you mean something other than the established definition, then perhaps a different word would be more appropriate. As for me proving that life/intelligence is computable, the burden is on you to prove non-computability, since the premise is yours, not mine. But if I were to take your challenge, you would need to tell me your formal representation of life/intelligence in order for the proposition to even make sense.

R0b
November 12, 2009 at 12:24 PM PST
Hi Niwrad, let me attempt to expand a little on point 4 as promised (as briefly as I can). In promoting the notion of irreducible complexity the ID community has all but spilt blood at the hands of some very bloody-minded people, and so I understand their emotional investment in IC. No need to worry, I'm not challenging IC here but I am simply making the opposite assumption of reducible complexity and seeing where it takes me. As far as you're concerned the following is a counterfactual argument.

Firstly let me make the general observation that as far as human logic goes our particular cosmic physical regime seems to have been selected from an infinitely larger space of possibility. If we assume equal a-priori probabilities over this huge space (as I believe is Dembski's practice) then the apparently contingent configurations and properties of our particular observable universe have an absolutely minute probability, thus displaying a high information content (using Dembski's concept of information, –log(p)).

Reducible complexity (a requirement of any form of evolution including "Darwinism") demands at least the following conditions in morphospace: a) A class of forms across a wide spectrum of complexity that have a high probability of persisting. b) That this class is fully connected like a Mandelbrot set. Given these conditions then random thermal agitation allows for a random walk of this connected set, with the network of persistence probabilities effectively acting as channels, depressions, wells and traps that have the effect of considerably enhancing the probability of this class of configurations by these configurations accumulating and damming up the "flow of probability". Now, there are two ways of assigning these persistence probabilities. One way is to simply put them in by hand on an item by item basis.
This contrived method of assigning persistence probabilities is what, I think, ID theorists would identify as an obvious and explicit form of frontloading. (Dembski identified this form of frontloading in the Avida experiment in computational evolution. See here; many thanks to Kairosfocus for alerting me to this paper.) In doing this job by hand the configurations selected for high persistence probability swap a very low probability for a relatively high probability, thus effectively losing information. These enhanced probabilities presumably result from conditions further back in the source doing the selection, conditions that give rise to this highly improbable distribution of persistence probabilities. The source bears the low probability and thus the information appears to "come from" the source (presumably intelligence in this case) that assigns the persistence probabilities.

There is however, to my mind, another conjectured way in which the source may assign persistence probabilities and thus bear the low probability. That source might distribute the persistence probabilities using a succinct mathematical function or functions, and these functions constitute the "physics" of the system. Trouble is, it seems fairly clear that the persistence probabilities required to do the job would form a very complex pattern in morphospace. So the question is: can such a pattern be defined by a set of relatively simple equations? Frankly I don't know the answer to that question. We do know, of course, that there is a relatively small class of large complex patterns that can be generated in relatively fast time from elegant mathematics/short algorithms, e.g. fractals and highly complex disordered sequences. These complex forms with a fast-time map to relatively simple mathematical functions are very rare, and so, applying Dembski's assumption of equal a-priori probabilities, they are thus able to bear the information burden.
Hence at this stage in my thinking the following two questions are unsettled in my mind: a) Is it in principle possible to distribute persistence probabilities using succinct algorithms/mathematics? b) Is the physics of our world one of those succinct systems? As long as I remain unsure about the answer to these questions, evolution is a "design" candidate. I agree there is nothing we know that obliges the configurations of this world to have high probabilities, as it seems those configurations have been selected from an enormous space of possibility, thus implying some high information source. But I am in doubt about how that source bears the information burden.

My comment about computational irreducibility is a pessimistic "what if" worst case scenario. If the computation from an elegant set of laws to a complex assignment of persistence probabilities is computationally irreducible then, as I have already suggested, analysis is going to be difficult and the burden will be on experiment to show the way. Trouble is, simply taking a few experimental samples isn't going to prove much either way. If this is the case then it looks to me that the argument will run and run.

Note: On that adjective "personal" I probably have quite a lot in common with Brian McLaren, but I don't want to get into any "fights" on that score! I've got enough on my plate with this evolution question!

Timothy V Reeves
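Timothy's information measure is easy to make concrete: under his equal-a-priori-probabilities assumption, a configuration drawn uniformly from N possibilities has probability p = 1/N and carries −log₂(p) = log₂(N) bits. A small sketch (the numbers are purely illustrative):

```python
import math

def information_bits(p: float) -> float:
    # Dembski-style self-information (surprisal): I = -log2(p) bits.
    return -math.log2(p)

# A configuration picked uniformly from 2**20 equally likely
# possibilities has p = 2**-20 and so carries 20 bits.
assert abs(information_bits(1 / 2**20) - 20.0) < 1e-12
```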
November 12, 2009 at 03:47 AM PST
R0b #34
I’m not sure what hint you’re looking for. Why do you think that your function can’t be implemented with a lookup table?
My function f is Turing incomputable because it is a priori impossible to know syntactically whether your answer x (the argument of f) is outputted e.g. by a text pseudo-random generator or a spam engine or whatever mechanical system, rather than by a human mind. It is possible to know it semantically, for example if you declare that answer x is your own design. But your declaration would not be computation at all, but rather ontological agency.
Regarding your function, when you say “output of a TM”, do you mean a particular TM, or is the TM also an argument of the function? If the latter, then the number of allowed TMs must be finite in order for the domain of f(x,tm) to be finite. The halting problem would render this function non-computable only if f had to work for all TMs.
The abstract definition of a TM is finite. The definition of the construct "output of a TM" is finite too. So is the definition of the "Boolean function with value y=1 if x is the output of a TM and value y=0 if x is not". The halting problem is not the only case of incomputability.
I wasn’t trying to. I was merely pointing out that TMs can output profound English text, other TMs, or any other finite output, contrary to your claims.
I didn't claim that "TMs cannot output profound English text, other TMs, or any other finite output". Indeed, just above I said that a random generator (given enough time) might mechanically output your previous answer x (which is an English text). I said that a TM cannot semantically compute another TM. To be precise, a TM can semantically compute exactly nothing, for the simple fact that TMs have no understanding, and without understanding there is no semantics.
Your premises involving non-computability are not true, so while computing theory lends an appearance of rigor to your philosophical arguments, your application of it isn’t correct.
My premise was that life/intelligence is non-computable. This was the starting point of the discussion between you and me. To show that "my application of computing theory isn't correct" you could try to prove that life/intelligence is computable.

niwrad
November 12, 2009 at 02:32 AM PST
niwrad:
I provided a function f with finite domain and said it was incomputable. You said it is trivially computable using a lookup table, but you didn't give a hint about such a computation.
I'm not sure what hint you're looking for. Why do you think that your function can't be implemented with a lookup table? Regarding your function, when you say "output of a TM", do you mean a particular TM, or is the TM also an argument of the function? If the latter, then the number of allowed TMs must be finite in order for the domain of f(x,tm) to be finite. The halting problem would render this function non-computable only if f had to work for all TMs.
Hence the fact that a TM, as you say, "is capable of outputting canned answers" (i.e. syntactic computation) doesn't refute the ID inference about biological complexity.
I wasn't trying to. I was merely pointing out that TMs can output profound English text, other TMs, or any other finite output, contrary to your claims. Your premises involving non-computability are not true, so while computing theory lends an appearance of rigor to your philosophical arguments, your application of it isn't correct.

R0b
November 11, 2009 at 05:44 PM PST
Thanks very much for the reply Niwrad. Yes I agree, our positions aren't very different; I accept ID's kernel idea of a design source. I think the main difference between myself and many of the correspondents on UD is that I haven't been able to clear evolution off my desk by consigning it to the "obviously false" waste paper bin; its status in this respect is still unsettled in my mind. Thus, for me evolution remains a favourable candidate under consideration, a candidate that presents no outright contradiction with ID's notion of design. So bear with me. Frontloading in the sense I mentioned in my last post is almost a trivial truism, but I suppose it's the more subtle forms of frontloading that are the bug in the rug. This is where the controversy arises: if one can't see the frontloading because it is buried deep in the convoluted logic of the system then one might think it not to be present at all, thus seeing no need for some sort of creative dispensation (cue atheism). Alternatively, one might want to posit a more obvious form of frontloading in order to make the case for special creation less equivocal (as per some forms of very "in yer face" ID). On the subject of being "implicit" I'll get back to you on point 4 as briefly as I can and try to be clearer. So watch this space.

Timothy V Reeves
November 11, 2009 at 06:08 AM PST
R0b #31
Any function with a finite domain is trivially computable using a lookup table.
I provided a function f with finite domain and said it was incomputable. You said it is trivially computable using a lookup table, but you didn't give a hint about such a computation.
I had never heard the term “semantically compute” until I read your comment. So I googled it and found what might be a relevant reference in Bram Van Heuveln’s philosophy dissertation. Is his usage of the term the same as yours?
It seems to me that Van Heuveln's distinction between semantic computation and syntactic computation is based on the fact that the latter doesn't represent real causation; it only simulates real causation. IOW, syntactic computation processes symbols without understanding their meaning, as semantic computation does. This distinction makes sense and could also be a useful concept in AI. For example, we can say that computers carry out syntactic computation while the human mind carries out semantic computation. As a consequence, AI machines can only simulate human intelligence, without being real intelligence. One of the key points here is the distinction between simulation and reality.
I suspect we’re stepping outside of computing theory, and into an area of imprecisely defined concepts. [...] The question of whether a TM can understand the questions and answers is a philosophical one (strong vs. weak AI), and thus falls outside of computing theory.
The beautiful thing about the ID/evo debate (and the reason I like it) is precisely that it covers a lot of interrelated philosophical and scientific fields. To return to the above issue, and trying to apply it to the real case of biological complexity: this complexity arose thanks to real causation, that is, semantic "computation". Hence the fact that a TM, as you say, "is capable of outputting canned answers" (i.e. syntactic computation) doesn't refute the ID inference about biological complexity. We can also look at the problem from another point of view. Let's suppose per absurdum that the TMs present in the cells are syntactically computed by another TM. Where did this "parent" TM arise from in the first place? This way we have only shifted the problem of the arising of TMs without resolving it. Therefore we return to the point I tried to explain in my previous post: eventually a machine can produce (syntactically) other machines, but at the start of the process an intelligent agent must exist, who knows the semantics of what he is doing when creating the first machine (and its embedded potentiality to produce offspring).

niwrad
November 10, 2009 at 01:20 AM PST
niwrad:
Let y=f(x) be the Boolean function with value y=1 if x is the output of a TM and value y=0 if x is not. This function f has both a finite domain and a finite codomain, and its definition is finite too, but f is not computable.
Any function with a finite domain is trivially computable using a lookup table.
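R0b's lookup-table point is textbook computability: once a finite function's graph is fixed, a program that merely stores the (x, f(x)) pairs computes it; the open question in this exchange is how those pairs come to be known, which is separate from computability. A minimal sketch with hypothetical table entries:

```python
# A Boolean function on a finite domain, given as its explicit graph.
# The entries below are hypothetical placeholders.
table = {
    "string emitted by some TM": 1,
    "string written by a person": 0,
}

def f(x: str) -> int:
    # Pure table lookup: no analysis of x is performed.
    return table[x]

assert f("string emitted by some TM") == 1
```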
If a TM could semantically compute a TM, we would have a TM able to compute the meaning of a TM.
I had never heard the term "semantically compute" until I read your comment. So I googled it and found what might be a relevant reference in Bram Van Heuveln's philosophy dissertation. Is his usage of the term the same as yours? If so, then I'll read the paper to find out exactly what the term means, and then respond to your point. I suspect we're stepping outside of computing theory, and into an area of imprecisely defined concepts.
In “The Emperor’s New Mind” Roger Penrose says that a computer cannot answer self-reflexive questions such as “What does it feel like to be a computer?” Here we face a similar impossible problem: a TM cannot compute the self-reflexive question “What is a TM?”.
A TM is certainly capable of outputting canned answers to those questions. The question of whether a TM can understand the questions and answers is a philosophical one (strong vs. weak AI), and thus falls outside of computing theory.

R0b
November 9, 2009 at 04:07 PM PST
R0b #29 Let x be your answer. It is finite because it is 280 bytes long. Let y=f(x) be the Boolean function with value y=1 if x is the output of a TM and value y=0 if x is not. This function f has both a finite domain and a finite codomain, and its definition is finite too, but f is not computable. In fact, it is impossible to determine its value by means of simple calculations. Of course there can be other methods to know its value, but they are not computations. Therefore there can be finite non-computable things. Consider a TM t with a finite definition. One could think of another finite TM q whose output is t. This can work syntactically. But what if we consider the thing from a semantic point of view (in my previous comment #26 I used the adverb "ontologically" in this sense)? If a TM could semantically compute a TM, we would have a TM able to compute the meaning of a TM. But this is a self-reference fallacy. In "The Emperor's New Mind" Roger Penrose says that a computer cannot answer self-reflexive questions such as "What does it feel like to be a computer?" Here we face a similar impossible problem: a TM cannot compute the self-reflexive question "What is a TM?". The problem can be expressed in an ID framework. A TM must be designed. It cannot design itself and cannot be designed by another TM (because a TM is not intelligent). Since life implies TMs, life is designed.

niwrad
November 9, 2009 at 02:02 PM PST
niwrad:
Then the question is: can a TM create a TM? My answer is "no" because a TM, with respect to its outputs, is a meta-concept of creation. A meta-concept cannot be generated as if it were a simple output of itself. In other words, ontologically a TM cannot create the concept of a TM.
Your views regarding computability seem quite foreign to computing theory. Anything that's finite is computable, including a finite automaton like a TM. The question of computability has nothing to do with ontology, since computing theory deals only with abstract representations.

R0b
November 9, 2009 at 10:55 AM PST
Thanks for your points Timothy V Reeves, which show that our positions are not so distant after all. 1) Ok. But I am afraid that on that adjective "personal" we could discuss a lot... 2) Ok. 3) Agreed. Frontloading in the sense you describe it is undeniable. It is almost a tautology that any system is frontloaded with its own potentiality. A biological embryo is frontloaded with the potentiality of developing into a living being. A biological cell is frontloaded with the potentiality of sustaining/maintaining, reproducing, differentiating, etc. These potentialities must be perfectly designed bit by bit. 4) Sorry, I am not sure I understand you here. However, a) the equations at play cannot be other than those of actual science, and these don't contain biological CSI; b) I don't see how the layout of morphospace can provide CSI either. 5) Yes. 6) Uhmm, I am not so sure that [frontloaded macro]evolution may be humanly undecidable. I think there are good reasons to doubt frontloaded macroevolution. 7) You speak about "reducibly complex self sustaining/maintaining structures", but from these to arrive at organisms there is a long way that has to be filled with CSI. Whoever mixes "chance and necessity" only, without adding CSI, simply obtains by-products of "chance and necessity", nothing more. 9) For me the timing problem is of secondary importance. My primary issue is to know *how* life and species arose, not *when* they arose. Timing is important for those who base their arguments on probability and resource spaces. I tend to base my arguments on matters of principle (and something tells me you do too).

niwrad
November 8, 2009 at 01:19 PM PST
Thanks for the reply Niwrad. Here are some points: 1) You say "The necessity of an information source holds true also for your morphospace...". Yes, it certainly does! I'm not disputing the ID view that the universe is sourced in some kind of a priori complexity; in this connection I am personally committed to the notion of a personal God. 2) Yes, I am positing some kind of frontloading. In fact I think that Dembski's recent ideas show that however we try to cut the cloth (evolution or no evolution), somewhere in the great space of possibility an informational skew must be contrived in the form of heavily weighted probabilities (barring multiverse speculations which attempt to spread probability evenly/symmetrically, a view which has problems of its own!) 3) I can't see how some kind of frontloading can be escaped. After all, elementary and disorganized matter is constantly being annexed into new organic structures without violation of thermodynamics or natural processes. This works, of course, because of the physical constraint inherent in the initial physical conditions in the form of preexisting biological machinery. This preexisting biological material, with the wherewithal to organize elementary and disorganized matter almost indefinitely, to my mind constitutes a form of frontloading that even the most ardent second dispensationalist cannot deny. I suppose it all hinges on just what one means by "frontloading". I'll be interested to see your post on the subject. 4) Equations don't generate CSI? No, I agree, but my question is actually this: can CSI be implicit in the equations themselves in as much as they fix the appropriate connected morphospace pattern of self sustaining structures?
The attempts by ID theorists that I have looked at to show that CSI cannot be generated by equations have made two assumptions: a) that the CSI needs to be generated, as opposed to being implicit in the equations from the outset; b) that they expect those equations to directly imply biological structures, when in fact they should be looking at the layout of morphospace. 5) There is no "Darwinism" without a connected morphospace pattern of self sustaining structures; it is an implicit requirement of "Darwinism", like it or not. 6) May I remind you of the potentially big problem of computational irreducibility. It may not be possible to get an analytical handle on any linked pattern in morphospace (assuming it exists) unless the computation is actually done in front of us. And if it were executed we may still have a big problem: because of human limitations we are likely only to be able to sample a small part of it; thus the question of whether evolution is happening or not may be humanly undecidable. 7) I'm not here getting involved with the question of the evolution of higher forms of intelligent life, which via some form of self-reference may entail non-computability a la Roger Penrose, for example. I've tried to pare the problem down to the simpler question of the mathematical existence of a class of reducibly complex self sustaining/maintaining structures. 8) If life is a mix of "chance and necessity" (what I call "law and disorder"), then don't forget Who is doing the mixing! 9) Are you a YEC?

Timothy V Reeves
November 8, 2009 at 08:19 AM PST
R0b #24
Non-computability is a very strong claim. What makes you think that life is non-computable?
When I speak of life I always mean "intelligent life", because even lower living forms show at least a reverberation (so to speak) of the effects of intelligence. So your question becomes: "why is intelligence non-computable?". The topic is so important (it involves the deep nature of intelligence and its relations with life) that I will dedicate an entire article to it. I ask for your patience to wait for it. Thank you. However, even if by "life" we mean a mechanical process (and in doing so we are applying a reductionist approach I don't agree with), in a sense we can say that life is non-computable. In fact in my comment #19 I said that life implies Turing machines and their IC. According to Turing, computability means "to be generable by a TM". Then the question is: can a TM create a TM? My answer is "no" because a TM, with respect to its outputs, is a meta-concept of creation. A meta-concept cannot be generated as if it were a simple output of itself. In other words, ontologically a TM cannot create the concept of a TM. From this point of view the concept of a TM is non-computable. Since life entails the concept of a TM, life too is non-computable. Hence even from a merely mechanical perspective we arrive at the apparently paradoxical conclusion that life is non-mechanical.

niwrad
November 7, 2009 at 05:37 AM PST