10 Replies to “Solving Engineering Problems Using Theology”

  1. 1
    Seversky says:

    Does this involve praying that the finished product works as it was intended to?

  2. 2
    EricMH says:

    At least that’s better than the Darwinian approach of throwing together arbitrary software projects, randomly typing code with no overarching goal and hoping something good comes out of it. Unfortunately, Darwinian software development is more common.

  3. 3
    anthropic says:

    Very interesting! Well done, and thanks for sharing. AAI was particularly intriguing.

    However, I did find the throat-clearing coughs a bit distracting.

  4. 4
    johnnyb says:

    Sorry – I am coming off of bronchitis. I actually edited out the coughs which I could (i.e., they were far enough away from actual words that I could do so).

  5. 5
    critical rationalist says:

    While Alan Turing might have defined what a Universal Turing Machine was, he didn’t invent the first UTM. Rather, Charles Babbage’s Analytical Engine would have been a true UTM had it actually been built. (Not to be confused with his Difference Engine, which was not a true UTM.) However, virtually no one recognized the value of what Babbage had designed, and the idea was independently developed by Turing.

    Also, I would suggest the problem with Artificial General Intelligence is due to a confusion about how human beings create new knowledge. IOW, it’s an issue of the philosophy of the growth of knowledge, not technology, that is holding us back. That mistake is present in the idea that human beings mechanically derive the contents of theories from observations. We guess, then criticize our guesses.

    A true AGI would be able to create new knowledge like we do, which is outlined in this article.

    It’s unclear how this actually solves engineering problems. Rather, it assumes that the problem of AGI is not solvable, as opposed to currently unsolved and requiring a workaround until such time as it is solved. Furthermore, the steps we would take seem to be identical, regardless of whether theology is involved.

  6. 6
    LocalMinimum says:

    CR @ 5:

    While I do believe that general AI is not only possible, but is coming around the corner soonish, the article falsely equates it with experiential consciousness. It’s another “ignore it” solution to the hard problem, riding as a passenger on a reasonable/technical proposal. He assumes consciousness emerges from matter, then proposes that we can make consciousness emerge from matter. Well, point B seems to naturally follow from point A, but getting from point A’ to point A was a blindfolded helicopter ride.

    Getting back to AGI, the article seems to be stating that pure induction does not produce knowledge. I could relate pure induction to biological evolution: neither has any foresight or generally useful direction, and both are doomed to become stranded on islands of functionality/applicability surrounded by oceans of non-functionality and circular/non-results.

    Following the article, it is conjecture and theoretical rule building that save the day. Being able to construct and modify systems of rules to their logical ends to maximize/complete inductive correlations and then test for correlations outside of expectations readily at hand is how you scaffold over/step out of these informational potential wells.

    My personal vision of a general AI was always an expert system equipped with tools to build and modify arbitrarily nested hierarchical structures of rules/math to better correlate with empirical experience. Basically, just build and explore rule sets that can produce the result set. It would still just be a really fancy calculator, though. General AI could just be a buzzword for a threshold in metaprogramming, a term we won’t even remember once such technology becomes a ubiquitous gradient, like “video phones”.
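    The rule-building idea above can be sketched in a few lines. This is my own toy illustration, not anything from the article: the primitive rules, the `compose` helper, and the `search` function are all hypothetical names I made up. It brute-forces compositions of primitive rules and keeps the first one that reproduces the observed input/output pairs — "induction" over a rule space, with no foresight, exactly the fancy-calculator flavor described.

    ```python
    import itertools

    # Hypothetical primitive rules over integers (my own choice for the demo).
    PRIMITIVES = {
        "double": lambda x: 2 * x,
        "inc": lambda x: x + 1,
        "square": lambda x: x * x,
    }

    def compose(names):
        """Build a function that applies the named primitives in order."""
        def rule(x):
            for name in names:
                x = PRIMITIVES[name](x)
            return x
        return rule

    def search(inputs, outputs, max_depth=3):
        """Enumerate rule compositions up to max_depth; return the first
        one that reproduces every observed input -> output pair."""
        for depth in range(1, max_depth + 1):
            for names in itertools.product(PRIMITIVES, repeat=depth):
                rule = compose(names)
                if all(rule(i) == o for i, o in zip(inputs, outputs)):
                    return names
        return None  # stranded: no rule set in range fits the data

    # Observations generated by (x + 1)^2 for x = 1, 2, 3
    print(search([1, 2, 3], [4, 9, 16]))  # -> ('inc', 'square')
    ```

    Of course, this only ever finds rules expressible in the primitives it was given — which is the point: the interesting step is conjecturing new primitives, not grinding through the existing ones.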

    Experiential consciousness, however, remains a hard problem.

  7. 7
    Eric Anderson says:

    Seversky @1:

    LOL!

  8. 8
    EricMH says:

    If AGI is unsolvable the main engineering implication is the human mind can do things computers cannot do. We need to figure out how to best utilize the mind’s superior capability and cease trying to replicate it.

  9. 9
    critical rationalist says:

    He assumes consciousness emerges from matter, then proposes that we can make consciousness emerge from matter. Well, point B seems to naturally follow from point A, but getting from point A’ to point A was a blindfolded helicopter ride.

    The author is saying consciousness would be a side effect of AGI, as opposed to consciousness being some kind of starting point in some justificationist role. For example, we are constantly criticizing ideas that pop into our minds at the subconscious level. It’s just that most of the time we’re not aware of it.

    I could relate pure induction to biological evolution: …

    But that’s not what the theory of neo-Darwinism suggests. It’s a theory of competing replicators that are imperfectly copied, which isn’t specific to biology. Evolution doesn’t mechanically derive knowledge from some source any more than people derive the contents of theories from observations. In both cases, variations are made and criticized. In the case of evolution, those variations are random with respect to any problem to solve. In the case of people, variations are often specific to a problem to solve. But neither derives the contents of knowledge from anything. The same would be said for AGI. It would output new explanatory theories about how the world works that were not initially input.

    As the article points out, it’s an issue of epistemological philosophy, not megahertz and gigabytes.

  10. 10
    critical rationalist says:

    To clear one thing up, I’m not suggesting that neo-Darwinism creates the same kind of knowledge that people do, as there are two kinds: explanatory knowledge, which only people can create, and non-explanatory knowledge, which both evolution and people can create.

    The former can only be created by people because only people can conceive of specific problems and conjecture specific explanatory theories about how the world works, in reality, to solve those problems. AGI would have that same capacity, so it would qualify as a person as well.
