
Human languages are irreducibly complex?


Says this German mag, in translation:

Farewell to the World Formula: The laws of nature are ephemeral
Natural laws are in line with established opinion to immutable component of the natural sciences. A physicist and a philosopher now say goodbye to the idea. by Edu

Why so and not otherwise?

Until recently was Lee Smolin, of the Perimeter Institute in Waterloo thinkers from Canada, expire this idea. But now he opposes her, along with the Brazilian philosopher Roberto Mangabeira Unger of Harvard Law School. They have a thick book published entitled “The Singular Universe and the Reality of Time”. In it they go from “most interesting feature of the natural world”, namely the fact “that it is what it is and not something else.”

As trivial as it sounds, so explosive is the thesis in professional circles. They attacked frontally string theory whose plurality of parallel universes considered our universe as a mere coincidence. The singularity theory emerged but deeper, quasi the source of the cosmic flow of time. Why it exists at all? According to the classical conception of time and space are not really physics, rather they form the “eternal” framework in which is happening the natural disaster. A metaphysical idea. Einstein’s greatest achievement in the general theory of relativity was that he transformed this metaphysics into physics, time and space melted the dynamic physical field of spacetime. But even this spacetime is still subject to immutable laws – the Einstein equations – that determine how matter transforms the space-time. But what happens when these laws themselves also changed?

No, we don’t entirely get it either, but they may be thinking of something like what Neil Turok, also at the Perimeter Institute in Canada, was trying to say: Grow up.

Readers?

Follow UD News at Twitter!

4 Replies to “Human languages are irreducibly complex?”

  1. mahuna says:

    Ah, yes, a machine’s attempt to translate one human’s ideas into another human’s language.

    It takes experience with both languages and a fair bit of artistry.

    In the Sci Fi novel “The Tomorrow File”, the breakthrough in translating human thoughts came when one of the guys realized that you can’t translate the individual words. You have to translate whole sentences to the equivalent sentence. And that still leaves you with cultural tie-ins implied or suggested by the specific words and the way the thought is phrased.

    So, yeah, I kinda get the guy’s drift, but I’ll wait for a real translation.
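
mahuna’s point that individual words can’t be translated in isolation can be sketched with a toy snippet (my illustration, not from the thread; the idiom and the per-word dictionary are invented for the example):

```python
# Toy illustration: why word-by-word translation fails.
# The German idiom "Ich verstehe nur Bahnhof" means roughly
# "It's all Greek to me", but a per-word dictionary can only
# produce the literal gloss.

word_dict = {            # hypothetical per-word German-English dictionary
    "ich": "I",
    "verstehe": "understand",
    "nur": "only",
    "bahnhof": "train station",
}

def word_by_word(sentence: str) -> str:
    """Translate each word in isolation, ignoring all context."""
    return " ".join(word_dict.get(w.lower(), w) for w in sentence.split())

print(word_by_word("Ich verstehe nur Bahnhof"))
# -> "I understand only train station" (literal nonsense)
# A sentence-level translator would instead emit the equivalent
# idiom: "It's all Greek to me."
```

A sentence-level system would have to map the whole idiom to its English equivalent, which is exactly the context problem the comments below dwell on.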

  2. bornagain77 says:

    The translated OP is a perfect example of Erik J. Larson’s contention that computer translation will never equal a human translation since computers cannot take the context of a sentence into consideration when translating it:

    What Is a Mind? More Hype from Big Data – Erik J. Larson – May 6, 2014
    Excerpt: In 1979, University of Pittsburgh philosopher John Haugeland wrote an interesting article in the Journal of Philosophy, “Understanding Natural Language,” about Artificial Intelligence. At that time, philosophy and AI were still paired, if uncomfortably. Haugeland’s article is one of my all time favorite expositions of the deep mystery of how we interpret language. He gave a number of examples of sentences and longer narratives that, because of ambiguities at the lexical (word) level, he said required “holistic interpretation.” That is, the ambiguities weren’t resolvable except by taking a broader context into account. The words by themselves weren’t enough.
    Well, I took the old 1979 examples Haugeland claimed were difficult for MT, and submitted them to Google Translate, as an informal “test” to see if his claims were still valid today. …
    … Translation must account for context, so the fact that Google Translate generates the same phrase in radically different contexts is simply Haugeland’s point about machine translation made afresh, in 2014.
    Erik J. Larson – Founder and CEO of a software company in Austin, Texas
    http://www.evolutionnews.org/2.....85251.html

    Of related note:

    The following site has some easy examples of the types of questions that would trip a computer up in a Turing test:

    Artificial Intelligence or intelligent artifices? – June 3, 2013
    http://www.uncommondescent.com.....artifices/

    Of particular note from the preceding article: since a computer has no free will with which to invent information, nor a consciousness with which to take context into consideration, one simple way of defeating the Turing test is to tell, or to invent, a joke:

    “(a computer) lacks the ability to distinguish between language and meta-language. …
    As known, jokes are difficult to understand and even more difficult to invent, given their subtle semantic traps and their complex linguistic squirms. The judge can reliably tell the human (from the computer)”
    Per niwrad
    http://www.uncommondescent.com.....artifices/

    Such as this joke:

    Turing Test Extra Credit – Convince The Examiner That He’s The Computer – cartoon
    http://imgs.xkcd.com/comics/turing_test.png

    Or this one:

    Turing Test – cartoon
    http://static.existentialcomic.....ngTest.jpg

    Related notes:

    For Artificial Intelligence, Humor Is a Bridge Too Far – November 13, 2014
    Excerpt: The article reminded me of an exercise in one of my first programming books that made me aware of the limits of computers and AI. I’ve forgotten the author of the book, but the problem was something like the following: “Write a program that takes in a stream of characters that represent a joke, reads the input and decides whether it’s funny or not.”
    It’s a perfect illustration of Erik’s statement, “Interestingly, where brute computation and big data fail is in surprisingly routine situations that give humans no difficulty at all.” Even when my grandchildren were very young I marveled at how they grasped the humor of a joke, even a subtle one.
    http://www.evolutionnews.org/2.....91211.html

    Algorithmic Information Theory, Free Will and the Turing Test – Douglas S. Robertson
    Excerpt: Chaitin’s Algorithmic Information Theory shows that information is conserved under formal mathematical operations and, equivalently, under computer operations. This conservation law puts a new perspective on many familiar problems related to artificial intelligence. For example, the famous “Turing test” for artificial intelligence could be defeated by simply asking for a new axiom in mathematics. Human mathematicians are able to create axioms, but a computer program cannot do this without violating information conservation. Creating new axioms and free will are shown to be different aspects of the same phenomenon: the creation of new information.
    ,,,The basic problem concerning the relation between AIT (Algorithmic Information Theory) and free will can be stated succinctly: Since the theorems of mathematics cannot contain more information than is contained in the axioms used to derive those theorems, it follows that no formal operation in mathematics (and equivalently, no operation performed by a computer) can create new information.
    http://cires.colorado.edu/~dou...../info8.pdf
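
    Larson’s observation that Google Translate emits the same phrase in radically different contexts can be illustrated with a deliberately context-free toy lexicon (my sketch, not Larson’s or Haugeland’s actual test material; the German example is invented):

```python
# Toy sketch: a context-free lexicon must commit to ONE sense of an
# ambiguous word, so it emits the same phrase in any context.
# German "Schloss" means both "castle" and "lock".

lexicon = {
    "das": "the", "schloss": "castle",   # one sense chosen, context be damned
    "auf": "on", "dem": "the", "berg": "hill",
    "an": "on", "der": "the", "tür": "door",
}

def translate(sentence: str) -> str:
    """Replace each word by its single lexicon entry, ignoring context."""
    return " ".join(lexicon.get(w.lower(), w) for w in sentence.split())

print(translate("Das Schloss auf dem Berg"))  # "the castle on the hill" - fine
print(translate("Das Schloss an der Tür"))    # "the castle on the door" -
                                              # should be "lock"; only the
                                              # surrounding words reveal that
```

    Only the surrounding words (mountain vs. door) disambiguate the sense, which is the “holistic interpretation” Haugeland was pointing at.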
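
    Robertson’s conservation argument can also be made concrete with a toy formal system (my illustration, using Hofstadter’s well-known MIU puzzle, which is not taken from Robertson’s paper): every derivable theorem is fixed in advance by the axiom and the rewrite rules, so a purely formal prover can never reach a string, such as “MU”, that lies outside that closure.

```python
# Toy illustration: the theorems of a formal system are determined
# entirely by its axioms and rules. System: axiom "MI" plus the four
# rewrite rules of the MIU puzzle. "MU" is famously underivable, so no
# amount of formal derivation can produce it.

def step(s):
    """Apply every MIU rewrite rule to s once, everywhere it fits."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                      # rule 1: xI  -> xIU
    if s.startswith("M"):
        out.add("M" + s[1:] * 2)              # rule 2: Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])  # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])        # rule 4: UU  -> (deleted)
    return out

def theorems(axiom="MI", max_len=8, rounds=6):
    """Breadth-first closure: everything derivable is fixed in advance."""
    known = {axiom}
    for _ in range(rounds):
        known |= {t for s in known for t in step(s) if len(t) <= max_len}
    return known

thms = theorems()
print("MIU" in thms)  # True - derivable from the axiom
print("MU" in thms)   # False - producing it would require information
                      # the axiom and rules do not contain
```

    Asking a formal prover for a statement outside this closure, or for a genuinely new axiom, is Robertson’s proposed way of defeating the Turing test.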

  3. Roy says:

    No.

  4. Robert Byers says:

    While human language is a very simple use of memorized sounds, it DOES seem to call for a translation of our soul’s thoughts.
    Are thoughts complex? I think the original language would have been simply thoughts represented by tones or sounds. Adam spoke right away, but surely it wasn’t God’s language he spoke.
    It must have been thoughts speeded up into sounds originally.
