Uncommon Descent Serving The Intelligent Design Community
Category: Artificial Intelligence

Walter Bradley: Tell people about AI, not sci-fi

His struggle to bring reality to “sci-fi” origin of life research inspired the Walter Bradley Center’s work on AI: The Bradley Center hopes to have a similar effect by promoting more general knowledge of fundamental issues around “thinking computers” and questions around the real effects of technology on human well-being. A friend sought to involve him in evolution issues in the mid-Seventies. He didn’t see how he could help; his specialty was materials science, where the subjects are interesting, “but they’re also dead.” Instead, he offered to evaluate research into the origin of life because he could better evaluate claims about the chemistry of non-living materials. There, he encountered a surprise: “It was very clear to me Read More ›

It is possible to demonstrate that AI will never think as humans do

Based on what we know of how algorithms work, it can be demonstrated mathematically that algorithms cannot deal with non-computable concepts: There is another way to prove a negative besides exhaustively enumerating the possibilities. With artificial general intelligence (AGI), if we can identify something algorithms cannot do, and show that humans can do it, then we’ve falsified the AGI position without running an infinite number of experiments across all possible algorithms. Eric Holloway, “The Flawed Logic behind ‘Thinking’ Computers, Part II” at Mind Matters. If Eric is correct, a great deal of the hype we hear in media is based not only on improbable concepts (the usual stuff) but impossible ones. See, for example, Top Ten AI hypes of 2018 Read More ›

“Thinking” computers? Some logical problems with the idea

If an algorithm that reproduces human behavior requires more storage space than exists in the universe, it is a practical impossibility that also points to the logical impossibility of artificial intelligence, Eric Holloway argues. He engaged in a three-part debate on the subject. Here’s the first part: The most basic sort of algorithm that can mimic human action is one that reproduces a recording of human behavior. So, one example of algorithmic intelligence is the following print statement: print: “So, one example of algorithmic intelligence is the following print statement.” And the program prints the sentence. So there you have it, an intelligent computer program! Admittedly, this is a silly example but it makes the point that intelligence is more than just functionalism. Read More ›
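The excerpt’s print-statement example can be sketched in a few lines of code. This is purely illustrative; the function name `replay` is my own, standing in for the idea of an algorithm that merely replays a recording of human behavior:

```python
# A minimal sketch of the "recording" argument from the excerpt:
# the simplest algorithm that mimics a human action is one that
# replays a stored recording of that action verbatim.

def replay(recording: str) -> str:
    """Return the stored 'human behavior' exactly as recorded."""
    return recording

sentence = ("So, one example of algorithmic intelligence is "
            "the following print statement.")
print(replay(sentence))
```

The program reproduces the behavior (printing the sentence) without anything a reader would call understanding, which is the excerpt’s point: matching outputs is not the same as intelligence.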

AI is not the artist’s new “robot overlord”

Software engineer and musician Brendan Dixon thinks AI is the perfect tool for creating social noise: If you believe all you read, AI is once again nipping at the heels of our humanity, this time by “creating” music all on its own (lyrics included). Soon we must submit to our “robot overlords.” Or not. The achievement celebrated at Digital Music News is, as so often, less than heralded and does not portend AI overtaking humanity. It mainly shows that few engineers understand art and even fewer artists understand engineering. Both look at (or listen to) the “work” and see more than is present. And both are wrong… Creating art begins by fully absorbing what makes art good and then extrapolating Read More ›

A philosopher explains why machines are not creative

When one considers all the reasons why machines cannot be creative, one must ask: is the belief that we can build superintelligent machines rooted in naturalism (the view that nature is all there is), often called “materialism,” or in evidence? Read More ›

Naturalists (materialists) can’t believe in love

They try but somehow the love story just won’t tell itself in a way that makes any sense: It may sound rational to conjecture that love is merely an emergent property of consciousness that has matured throughout the course of human evolution. But emergence is no less of a “god of the gaps” belief than Zeus’s lightning or Thor’s thunder. Zoe is a great film but it presents a storyline often used to show how inexplicable and ineffable love is in order to get me to believe that it isn’t. For example, the underlying dogma assumes reductionism (everything is material). Thus, the question addressed isn’t the obvious one, “Can a synthetic love a human?”; it is “Can a human love Read More ›

Detroit: Become Human – Adam Nieri on the twin pillars of the AI religion

Nieri looks at them as the narrative of the sci-fi game Detroit: Become Human develops them: A Closer Look at Detroit: Become Human, Part I Gaming culture provides a window into our culture’s assumptions about artificial intelligence (Adam Nieri) In the game, Detroit has transcended its current economic despair, emerging as the epicenter of the android revolution. Cyberlife, headquartered there, has become the first company to engineer and produce fully autonomous, general purpose AI androids for consumers. A Closer Look at Detroit: Become Human, Part II Adam Nieri: One pillar, if you like, of the worldview of the “Church of AI” is the belief that our embrace of artificial intelligence is a step on the road to a higher form Read More ›

Some reasons why machines won’t take over

Even if some people would like them to. In case the subject comes up over coffee. For example, physicist Alfredo Metere of the International Computer Science Institute (ICSI) insists that AI must deal in specifics but humans live in an indefinitely blurry world that is always changing: AI is a bunch of mathematical models that need to be realised in some physical medium, such as, for example, programs that can be stored and run in a computer. No wizards, no magic. The moment we implement AI models as computer programs, we are sacrificing something, due to the fact that we must reduce reality to a bunch of finite bits that a computer can crunch on. Alfredo Metere, “AI Read More ›

Eric Holloway: The Brain Exceeds the Most Powerful Computers in Efficiency

Human thinking takes vastly less computational effort to arrive at the same conclusions: For example, using a rough estimate for processing, let’s say the DeepMind AlphaGo Zero AI takes 16 quintillion (16 × 10¹⁸) CPU cycles of training to exceed a human level of play in Go. On the other hand, let’s say a conscious human being can execute the equivalent of 50 bits per second and concentrates on Go and related skills for an entire lifetime. This effort requires about 120 billion CPU cycles, far less than the AI requirement. Thus, AlphaGo Zero would need to become about 100 million times more efficient in CPU cycles to match the human player Read More ›
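The excerpt’s figures can be checked with quick back-of-envelope arithmetic. All the numbers below are the article’s rough estimates, except the 76-year lifetime, which is my own assumption to make the calculation concrete:

```python
# Back-of-envelope check of the excerpt's efficiency comparison.
# These are the article's rough estimates, not measurements.

ai_cycles = 16 * 10**18        # AlphaGo Zero training: 16 quintillion cycles

bits_per_second = 50           # estimated conscious human throughput
lifetime_seconds = 76 * 365 * 24 * 60 * 60   # ~76-year lifetime (assumption)
human_cycles = bits_per_second * lifetime_seconds

print(f"human effort: {human_cycles:.2e} cycles")   # ~1.2e11 (120 billion)
print(f"AI / human ratio: {ai_cycles / human_cycles:.1e}")  # ~1.3e8
```

The ratio comes out to roughly 10⁸, which is where the excerpt’s “100 million times more efficient” figure comes from.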