Uncommon Descent Serving The Intelligent Design Community

Human languages are irreducibly complex?

Share
Facebook
Twitter
LinkedIn
Flipboard
Print
Email

Says this German mag, in translation:

Farewell to the World Formula The laws of nature are ephemeral Natural laws are in line with established opinion to immutable component of the natural sciences. A physicist and a philosopher now say goodbye to the idea. by Edu

Why so and not otherwise?

Until recently was Lee Smolin, of the Perimeter Institute in Waterloo thinkers from Canada, expire this idea. But now he opposes her, along with the Brazilian philosopher Roberto Mangabeira Unger of Harvard Law School. They have a thick book published entitled “The Singular Universe and the Reality of Time”. In it they go from “most interesting feature of the natural world”, namely the fact “that it is what it is and not something else.”

As trivial as it sounds, so explosive is the thesis in professional circles. They attacked frontally string theory whose plurality of parallel universes considered our universe as a mere coincidence. The singularity theory emerged but deeper, quasi the source of the cosmic flow of time. Why it exists at all? According to the classical conception of time and space are not really physics, rather they form the “eternal” framework in which is happening the natural disaster. A metaphysical idea. Einstein’s greatest achievement in the general theory of relativity was that he transformed this metaphysics into physics, time and space melted the dynamic physical field of spacetime. But even this spacetime is still subject to immutable laws – the Einstein equations – that determine how matter transforms the space-time. But what happens when these laws themselves also changed?

No, we don’t entirely get it either, but they may be thinking of something like what Neil Turok was trying to say, at the Perimeter Institute in Canada: Grow up.

Readers?

Follow UD News at Twitter!

Comments
While human language is very simple use of memorized sounds IT DOES seem to call for a translation of out soul thoughts. Are thoughts complex? I think the original language would of been simply thoughts represented by the tones or sounds. Adam spoke right away but surely it wasn't Gods language he spoke. It must be thoughts speeded up to sounds originally.Robert Byers
July 5, 2015
July
07
Jul
5
05
2015
08:09 PM
8
08
09
PM
PDT
No.Roy
July 5, 2015
July
07
Jul
5
05
2015
07:13 AM
7
07
13
AM
PDT
The translated OP is a perfect example of Erik J. Larson's contention that computer translation will never equal a human translation since computers cannot take the context of a sentence into consideration when translating it:
What Is a Mind? More Hype from Big Data - Erik J. Larson - May 6, 2014 Excerpt: In 1979, University of Pittsburgh philosopher John Haugeland wrote an interesting article in the Journal of Philosophy, "Understanding Natural Language," about Artificial Intelligence. At that time, philosophy and AI were still paired, if uncomfortably. Haugeland's article is one of my all time favorite expositions of the deep mystery of how we interpret language. He gave a number of examples of sentences and longer narratives that, because of ambiguities at the lexical (word) level, he said required "holistic interpretation." That is, the ambiguities weren't resolvable except by taking a broader context into account. The words by themselves weren't enough. Well, I took the old 1979 examples Haugeland claimed were difficult for MT, and submitted them to Google Translate, as an informal "test" to see if his claims were still valid today.,,, ,,,Translation must account for context, so the fact that Google Translate generates the same phrase in radically different contexts is simply Haugeland's point about machine translation made afresh, in 2014. Erik J. Larson - Founder and CEO of a software company in Austin, Texas http://www.evolutionnews.org/2014/05/what_is_a_mind085251.html
of related note: The following site has some easy examples of the types of questions that would trip a computer up in a Turing test:
Artificial Intelligence or intelligent artifices? - June 3, 2013 https://uncommondescent.com/intelligent-design/artificial-intelligence-or-intelligent-artifices/
Of particular note from the preceding article: since a computer has no free will to invent information, nor a consciousness so as to take context into consideration, then one simple way of defeating the Turing test is to tell, or to invent, a joke:,,,
“(a computer) lacks the ability to distinguish between language and meta-language.,,, As known, jokes are difficult to understand and even more difficult to invent, given their subtle semantic traps and their complex linguistic squirms. The judge can reliably tell the human (from the computer)” Per niwrad https://uncommondescent.com/intelligent-design/artificial-intelligence-or-intelligent-artifices/
Such as this joke:
Turing Test Extra Credit – Convince The Examiner That He’s The Computer – cartoon http://imgs.xkcd.com/comics/turing_test.png
or this one
Turing Test - cartoon http://static.existentialcomics.com/comics/turingTest.jpg
Related notes:
For Artificial Intelligence, Humor Is a Bridge Too Far - November 13, 2014 Excerpt: The article reminded me of an exercise in one of my first programming books that made me aware of the limits of computers and AI. I've forgotten the author of the book, but the problem was something like the following: "Write a program that takes in a stream of characters that represent a joke, reads the input and decides whether it's funny or not." It's a perfect illustration of Erik's statement, "Interestingly, where brute computation and big data fail is in surprisingly routine situations that give humans no difficulty at all." Even when my grandchildren were very young I marveled at how they grasped the humor of a joke, even a subtle one. http://www.evolutionnews.org/2014/11/for_artificial_091211.html Algorithmic Information Theory, Free Will and the Turing Test - Douglas S. Robertson Excerpt: Chaitin’s Algorithmic Information Theory shows that information is conserved under formal mathematical operations and, equivalently, under computer operations. This conservation law puts a new perspective on many familiar problems related to artificial intelligence. For example, the famous “Turing test” for artificial intelligence could be defeated by simply asking for a new axiom in mathematics. Human mathematicians are able to create axioms, but a computer program cannot do this without violating information conservation. Creating new axioms and free will are shown to be different aspects of the same phenomenon: the creation of new information. ,,,The basic problem concerning the relation between AIT (Algorithmic Information Theory) and free will can be stated succinctly: Since the theorems of mathematics cannot contain more information than is contained in the axioms used to derive those theorems, it follows that no formal operation in mathematics (and equivalently, no operation performed by a computer) can create new information. http://cires.colorado.edu/~doug/philosophy/info8.pdf
bornagain77
July 5, 2015
July
07
Jul
5
05
2015
04:43 AM
4
04
43
AM
PDT
Ah, yes, a machine's attempt to translate one human's ideas into another human's language. It takes experience with both languages and a fair bit of artistry. In the Sci Fi novel "The Tomorrow File", the breakthrough in translating human thoughts came when one of the guys realized that you can't translate the individual words. You have to translate whole sentences to the equivalent sentence. And that still leaves you with cultural tie-ins implied or suggested by the specific words and the way the thought is phrased. So, yeah, I kinda get the guy's drift, but I'll wait for a real translation.mahuna
July 4, 2015
July
07
Jul
4
04
2015
06:18 PM
6
06
18
PM
PDT

Leave a Reply