From Jonathan Bartlett (aka johnnyb) in his new book,

New Programmers Start Here

What separates modern computers from the calculating machines of the past is that modern computers are general-purpose computers. That is, they are not limited to a specific set of predesigned features. I can load new features onto a computer by inputting the right program. How did we get the idea of creating such a general-purpose machine?

It turns out that a question in philosophy led to the creation of general-purpose machines. The question was this—was there a way to create an unambiguous procedure for checking mathematical proofs? This seems like an odd question, but it was a big question in the 19th century. There had been many “proofs” where it was unclear if the proof actually proved its subject. Thus, philosophers of mathematics tried to find out if there was a way to devise what was then called an “effective procedure” for checking the validity of a mathematical proof. But that leads to another question—what counts as an “effective procedure” anyway? If I list out the steps of a procedure, how do I know that I’ve given you enough details that you can accomplish this procedure exactly as I have described it? How can I tell that my instructions are clear enough to know that the procedure that I have listed can be unambiguously accomplished?

Alan Turing and Alonzo Church both tackled this problem in the 1930s. The results showed that one could define unambiguous procedures with the help of machines. By describing a machine that can perform the operation, one can be certain that the procedure will be carried out unambiguously.

In addition, Turing described a set of operations which could be used to mimic any other set of operations given the right input. That is, Turing defined the minimum set of features needed for a computing system to become truly programmable—where the programmer had an open-ended ability to write whatever software he wanted. Machines and programming languages that are at least as powerful as Turing’s set of features are known as Turing-complete or Universal programming languages. Nearly every modern programming language in common usage is Turing-complete.
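The idea can be made concrete with a toy simulator. The sketch below is not from the book; the `run_tm` helper, the state names, and the example machine are all invented for illustration. It runs a Turing-style table of (state, symbol) rules over a tape; the example table increments a binary number, and swapping in a different table gives a different "feature" with no change to the machine itself.

```python
# A minimal Turing-machine sketch (illustrative only). The table maps
# (state, symbol) to (symbol to write, move direction, next state).

def run_tm(table, tape, state="start", pos=0, blank="_", halt="halt"):
    tape = dict(enumerate(tape))               # sparse tape: position -> symbol
    while state != halt:
        sym = tape.get(pos, blank)
        write, move, state = table[(state, sym)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Example machine: binary increment, head starting at the leftmost digit.
inc = {
    ("start", "0"): ("0", "R", "start"),   # scan right past the digits
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),   # step back onto the last digit
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", "L", "done"),    # absorb the carry
    ("carry", "_"): ("1", "L", "done"),    # overflow: write a new leading 1
    ("done",  "0"): ("0", "L", "done"),    # walk off the left end
    ("done",  "1"): ("1", "L", "done"),
    ("done",  "_"): ("_", "R", "halt"),
}
print(run_tm(inc, "1011"))  # -> 1100
```

The point of the exercise is that the "hardware" (`run_tm`) never changes; only the table does, which is the essence of programmability.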

It is interesting to note that the creation of computing came from a question in philosophy. Many are eager to dismiss the role of philosophy in academics as being impractical or unimportant. But, as we see here, like all truths, philosophical truths have a way of leading to things of deep practical importance.

And what happened to the original question—can you develop an effective procedure for checking proofs? The answer is, strangely, no. It turns out that there are true facts that cannot be proved via mechanical means. But to learn that answer, we had to develop computers first. Of course, that leads to another interesting intersection between computers and philosophy.

If there are true facts that cannot be mechanically proved, how could we know that? The only way must be because our minds cannot be represented mechanically. This puts a limit on the potential capabilities of artificial intelligence, and shows that even though computer programmers have developed some very clever means of pretending to be human, the human mind is simply outside the realm of mechanism or mechanistic simulations. – from section 2.2, on Alan Turing

*See also:* Marvin Minsky and artificial intelligence: Could the Internet outlive humanity?

and

What great physicists have said about immateriality and consciousness


The point is not just that we know there are true things that cannot be mechanically proved, but that we can know what at least one of them is and we can know its truth through our own reasoning.

http://www.leaderu.com/ftissue.....essay.html

Denyse – thanks for the mention! It has long been my contention that ID has lots of usage outside of biology. My book doesn’t go into it in any more depth than is outlined here (this is in a section on the history and philosophy behind computer programming), but I certainly think that it can, and my goal here is to prep students for thinking about this in the future.

I just hope that a lot of other people follow suit and start introducing ID-oriented topics in their own disciplines. This has started to happen in economics, with Peter Thiel (cofounder of PayPal) and George Gilder (famous investor, and author of the classic Wealth and Poverty). Thiel’s book, Zero to One, is a direct application of ID theory to entrepreneurship (he makes this connection directly in his book). Gilder’s book, Knowledge and Power, introduces what he calls the information theory of capitalism, which is basically ID applied to macroeconomics.

In my more technical work, I have shown how ID can be applied to cognitive psychology, and how that in turn can impact software project estimation and development metrics. I’ve been trying to convince Winston Ewert to develop Algorithmic Specified Complexity into a theoretical model that can be used as the foundation for Test-Driven Development, but I think he finds such explorations far too far into the practical side of life 😉

In any case, while biologists may be able to bar ID from entry into their own profession, they simply are not powerful enough to bar it from entry into any other profession, where the goal is not orthodoxy but the further development of their profession.

If you want to apply ID to your profession but don’t know how, you should show up to the Alternatives to Methodological Naturalism online conference and let us help you brainstorm about it.

The Bible, in Proverbs, is the final word on human intelligence.

Wisdom, understanding, knowledge are its three pillars.

Can a computer do wisdom? No! It can only do memory. Can it do understanding? No. Can it do knowledge? Yes. Knowledge is memory of facts.

AI is not intelligence but artificial memorizing.

This is pretty seriously wrong. You seem to be confusing several related but different problems:

– Checking a (proposed) proof is, at least within a formal theory (essentially, a set of axioms and inference rules), pretty easy for both computer programs and humans.

– Figuring out if there is a proof of a given statement (within a particular theory) is hard, for both computer programs and humans. Mostly because there are an infinite number of possible proof sequences, and you can’t check them all.
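To make the contrast concrete, here is a minimal sketch of the "easy" direction: checking a proposed proof line by line. The mini-logic, the `check_proof` function, and the example are all invented for illustration; each line must be an axiom or follow from two earlier lines by modus ponens. Note that the checker only verifies a proof it is handed; it does nothing to *find* one.

```python
# Toy proof checker (illustrative mini-logic). Formulas are either plain
# strings or implications written as tuples ("->", A, B). A proof is a
# list of lines; each line must be an axiom, or follow by modus ponens
# (from A and ("->", A, B) on earlier lines, conclude B).

def check_proof(axioms, proof, goal):
    proved = []
    for line in proof:
        justified = line in axioms or any(
            ("->", a, line) in proved for a in proved)
        if not justified:
            return False          # this line has no justification
        proved.append(line)
    return goal in proved

axioms = {"p", ("->", "p", "q"), ("->", "q", "r")}
proof = ["p", ("->", "p", "q"), "q", ("->", "q", "r"), "r"]
print(check_proof(axioms, proof, "r"))  # -> True
```

Checking is a single linear pass over the given lines; searching for a proof would mean exploring an unbounded space of candidate line sequences, which is the hard direction described above.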

But I think you’re actually thinking of Goedel’s first incompleteness theorem, which is different from either of these:

– There is no formal theory that contains proofs (no matter how hard to find) of all true statements about the integers (unless it also contains proofs of at least some false statements). This can also be extended to computer programs: there’s no program which, given a statement, will print “true” if and only if the statement actually is true.

In other words, proof checking is easy; truth checking is hard.

Um, no. Take a look at the structure of Goedel’s proof. He constructs a statement about the prime factorization of integers that’s equivalent to “G_T: statement G_T is not provable within the theory T”. (For a computer program, you’d construct a statement like “P_T: program P will not print ‘true’ when given P_T as input”.) Clearly, if the statement is true, T cannot prove it and therefore is incomplete; on the other hand if it’s false, T is inconsistent (technically, ω-inconsistent, meaning that it’s inconsistent with the actual integers).

But Goedel doesn’t tell us which of these is the case! We can say, for certain, that there’s a problem with the theory/program, but we don’t necessarily know how to fix it! Goedel’s theorem is generally described as proving that every sufficiently general, ω-consistent formal theory is incomplete, but that form starts with the assumption that the theory is ω-consistent, and we don’t necessarily know that.
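The self-referential construction described above can be sketched in code. Everything here is illustrative: `claims_true` stands in for a hypothetical truth-checking program, and `p_source` is a placeholder for the (self-quoting) source text of the diagonal program. Whatever the checker answers about P_T, the program does the opposite, so the checker is wrong either way.

```python
# Hypothetical truth checker: given a program's source text and an
# input, it tries to predict whether running that program would print
# "true". This stand-in always answers True; any fixed answer it gives
# is defeated by the diagonal program below.
def claims_true(program_source, input_text):
    return True

# Placeholder for the self-quoting source text of the diagonal program.
p_source = "<source of run_p>"

# The diagonal program P_T: it prints "true" exactly when the checker
# predicts that it will NOT print "true" on this input.
def run_p(input_text):
    if not claims_true(p_source, input_text):
        print("true")
        return "true"
    return ""

# The checker's prediction and the program's actual behavior disagree,
# so no checker can be right about its own diagonal program.
prediction = claims_true(p_source, p_source)
actual = (run_p(p_source) == "true")
print(prediction, actual)
```

The same skeleton works no matter what `claims_true` does internally, which is why the argument rules out every candidate truth checker rather than just this dummy one.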

Actually, even if we were somehow sure the theory (/program) was ω-consistent, we still wouldn’t have shown we’re beyond the capabilities of formal theory and/or computers. The mistake is thinking the Goedel statement “cannot be mechanically proved”: the Goedel statement for a particular theory or computer program cannot be proved by that particular theory or program (unless it’s ω-inconsistent), but it can certainly be proved by other theories and programs. All we’d have shown is that we don’t have the same specific weakness that that theory/program has.

There isn’t anything about Goedel’s theorem itself that’s beyond the capabilities of either formal theory or computers, either. In fact, any specific statement (that can be expressed in the notation you’re using) can be proven by some theories, and recognized as true by some computer programs. If nothing else, the theories/programs that have it as an axiom can prove it.

In order to show the human mind actually exceeds the Goedel limit on formal theories and computer programs, we’d have to be able to identify all true statements about the integers, without mistakenly thinking any false ones are true. In fact, humans regularly fail that test in both directions: staring at statements and not being able to figure out if they’re true or false, and thinking things are true when they’re actually false. We’re so bad at it that we tend to turn to formal theory and computers to help us out; they’re not perfect, but in many ways they’re better than (unassisted) humans.

In light of that, I don’t see any actual basis for the ultimate claim: