Uncommon Descent Serving The Intelligent Design Community

Bill Dembski: Machines will never supersede humans!


On July 11, the Walter Bradley Center for Natural and Artificial Intelligence officially launched and design theorist William Dembski offered some thoughts:

The Walter Bradley Center, to the degree that it succeeds, will not merely demonstrate a qualitative difference between human and machine intelligence; more so, it will chart how humans can thrive in a world of increasing automation. …

Yet the Walter Bradley Center exists not merely to argue that we are not machines. Yes, singularity theorists and advocates of strong AI continue to vociferate, inflating the prospects and accomplishments of artificial intelligence. They need a response, if only to ensure that silence is not interpreted as complicity or tacit assent. But if arguing, even persuasively, with a Ray Kurzweil or Nick Bostrom that machines will never supersede humans is the best we can do, then this center will have fallen short of its promise.

The point is not merely to refute strong AI, the view that machines will catch up to and eventually exceed human intelligence. Rather, the point is to show society a positive way forward in adapting to machines, putting machines in service of rather than contrary to humanity’s higher aspirations.

Unfortunately, rather than use AI to enhance our humanity, computational reductionists increasingly use it as a club to beat our humanity, suggesting that we are well on the way to being replaced by machines. Such predictions of human obsolescence are sheer hype. Machines have come nowhere near attaining human intelligence, and show zero prospects of ever doing so. I want to linger on this dim view of AI’s grand pretensions because it flies in the face of the propaganda about an AI takeover that constantly bombards us.

It is straightforward to see that zero evidence supports the view that machines will attain and ultimately exceed human intelligence. And absent such evidence, there is zero reason to worry or fear that they will. So how do we see that? We see it by understanding the nature of true intelligence, as exhibited in a fully robust human intelligence, and not letting it be confused with artificial intelligence.

What has artificial intelligence actually accomplished to date? AI has, no doubt, an impressive string of accomplishments: chess playing programs, Go playing programs, Jeopardy playing programs just scratch the surface. Consider Google’s search business, Facebook’s tracking and filtering technology, and the robotics industry. Automated cars seem just around the corner. In every case, however, what one finds with a successful application of AI is a specifically adapted algorithmic solution to a well-defined and narrowly conceived problem.

The link for the whole talk will be available at the Mind Matters Today blog at the Walter Bradley Center shortly.

Here are some recent Mind Matters Today stories to check out:

Will AI lead to mass joblessness and social unrest? Jay Richards, author of The Human Advantage: The Future of American Work in an Age of Smart Machines, says the main problem is learning to adapt to a rapidly moving AI economy. As a result, he thinks that we should not fear the robots so much as the robot philosophers.

AI machines taking over the world? It’s a cool apocalypse, but does that make it more likely?
The usual problems with doomsaying also apply to predictions for artificial intelligence. For example, most doomsdays of any kind don’t happen because many unforeseen sequences of decisions and events change the scene.

AI can mean ultimate Big Surveillance: That’s what we should really worry about. The celebrity worry about superintelligent AI taking over and getting rid of us humans distracts our attention from a real-world fact: artificial intelligence (AI) maximizes the opportunities while crashing the costs of corporate and government surveillance. Both have grown massively in recent years, with predictable results.

Self-driving cars hit an unnoticed pothole. One good thing about Dixon’s predictions is that they are specific, unlike the AI apocalypses that gather a crowd for science celebs. He raises practical questions: Is Uber a good part-time job in the long term? Is long-haul trucking a wise career choice? If governments earmark money for self-driving lanes, is that a future benefit to most citizens or only a few? Meanwhile, at VentureBeat, computer vision researcher Filip Pieniewski thinks that an AI winter is setting in…

More coming soon.

See also: Announcement: Walter Bradley Center for Natural and Artificial Intelligence launches Wednesday, July 11


Announcement: New Walter Bradley Center to assess claims for artificial intelligence critically

FourFaces @18, Isn't the creator that programmed all those functions the intelligence, rather than the robot? Whose "various goals"? Robots don't have goals. This definition works: "the ability to acquire and apply knowledge and skills". But it was never meant to be applied to machines... which simply execute a program without acquiring any knowledge or skills. Think AlphaGo will ever be doing the tango? When you say: "Machines can use randomness and algorithms to generate new art", you're effectively saying: "the hammer nails the roof" and "the excavator builds the road". Free Will, Intelligence, and Creativity go together. Nonlin.org
FourFaces @15, Thanks for the response. Been very busy. I'll respond later tomorrow. DATCG
BA77 @13, Thanks, I agree with Michael Egnor's definitions and reasoning. On the Apple PC he mentioned in the last sentence of the article, I found this gem of an advertisement...
The first Apple computer: The Apple I (Apple 1) was the first Apple computer and originally sold for $666.66. The computer kit was developed by Steve Wozniak in 1976 and contained a 6502 8-bit processor and 4 KB of memory, which was expandable to 8 or 48 KB using expansion cards. Although the Apple I had a fully assembled circuit board, the kit still required a power supply, display, keyboard, and case to be operational. Below is a picture of an Apple I from an advertisement by Apple.
from this link... https://www.computerhope.com/issues/ch000984.htm DATCG
Nonlin.org @17, If someone builds a robotic cook that can walk into a generic kitchen and fix a breakfast meal of scrambled eggs with toast and cappuccino, I would definitely say that the robot is intelligent. I agree with you that AI programs are not creative. But one does not need creativity to be intelligent. Intelligence means having a causal or common sense understanding of the world and being able to use this understanding to achieve various goals. Creativity is not an intelligence phenomenon. It's purely spiritual. Machines can use randomness and algorithms to generate new art but they have no idea whether or not the art is beautiful. FourFaces
Their “superior” intelligence will be acquired as they learn. They will be tireless learners and faster, too, because their hardware will work faster. So they will be able to read books or watch videos at a much faster rate than humans. We may also give them sensory capabilities beyond those of humans.
It seems all these have happened already. Yet AI is not much different from any other ordinary tool such as a hammer. I would equate intelligence with creativity rather than brute computing power. And we have seen no creativity whatsoever from any AI. Nonlin.org
Nonlin.org @14: how can humans create machines that are more intelligent than themselves? Well, I don't think they can be more intelligent in the sense of having a more advanced brain. Their brains will be patterned after ours and use pretty much the same principles. Their "superior" intelligence will be acquired as they learn. They will be tireless learners and faster, too, because their hardware will work faster. So they will be able to read books or watch videos at a much faster rate than humans. We may also give them sensory capabilities beyond those of humans. Contrary to other AI researchers, I don't think a machine can be superintelligent in the sense that humans will be like monkeys or rats in comparison. There is a limit to the intelligence of a single brain, artificial or otherwise. It has to do with its hierarchical structure and the need to focus on one thing at a time. Machines, like humans, will have to specialize and form societies. Human civilization can be said to be superintelligent because it can accomplish things that no single human can. Machines will be similarly organized, IMO. It will be an age of plenty for all, an age of us sitting by the pool, eating sushi and drinking good wine while our synthetic servants attend to our needs and wishes. Unless, of course, we destroy ourselves first. That would be a bummer. FourFaces
DATCG @12: The question then becomes can “free will” be coded – or allowed from a framework of options. Evidently it can “be coded or allowed” if we believe we have free will today. And evidently free will can be deceived into doing evil. In my opinion, free will cannot be coded in a computer because it requires a non-physical cause. A computer only works according to physical causes and effects. It is obvious, at least to me, that human free will exists only because we have a soul that can interact with certain neurons in the cortex in order to choose among multiple courses of actions presented to it by the brain. These include choices of what to pay attention to. By the way, I don't believe animals have free will because I don't believe they have souls. Animal behavior reflects the instincts and learning abilities genetically programmed into them by the original designers. FourFaces
FourFaces@3, Interesting views, but how can humans create machines that are more intelligent than themselves? ... 2. AI is expected by many to eventually surpass human intelligence just as power machines dwarf biologic power. This is unlikely because the source of AI intelligence is the human creativity that designed the device. Mechanical tools can reach many more times our natural power because our intelligence rather than our power is the source of their power. But there is nothing foreseeable we can contribute to the AI tool to bootstrap its intelligence beyond ours. A cat cannot design a cat and a chimp cannot design a chimp. How convenient but unlikely would it be that the level of intelligence required to bootstrap human intelligence would be just below our own intelligence? And from there, how would a machine bootstrap its own intelligence to a higher level – a feat humans have not even attempted seriously? ... http://nonlin.org/ai/ Nonlin.org
Can a Computer Think? - Michael Egnor - March 31, 2011 Excerpt: The Turing test isn’t a test of a computer. Computers can’t take tests, because computers can’t think. The Turing test is a test of us. If a computer “passes” it, we fail it. We fail because of our hubris, a delusion that seems to be something original in us. The Turing test is a test of whether human beings have succumbed to the astonishingly naive hubris that we can create souls. It’s such irony that the first personal computer was an Apple. https://evolutionnews.org/2011/03/failing_the_turing_test/
So, yes, the "machine" or "code" will still be coded or hacked code will change it. The question then becomes can "free will" be coded - or allowed from a framework of options. Evidently it can "be coded or allowed" if we believe we have free will today. And evidently free will can be deceived into doing evil. So, maybe if given enough time, we eventually discover how to create for lack of better words, or code for free will? And again, such free will can be deceived? Or why would the Creator in Genesis 3:22 care to stop human beings from living an eternal existence? I don't think God was concerned about us living forever. But what we can attain as a result over time? Gen 3:22 The LORD God said, "Behold, the man has become like one of us, knowing good and evil. Now, lest he put forth his hand, and also take of the tree of life, and eat, and live forever..." Theologically, maybe I'm on unsound ground in such an interpretation? But note that God is not concerned about us living forever. Only that like "us" or God, that we would "know good and evil" and unlike God, may not be able(edit: or willing) to control it. DATCG
FourFaces @8, Hmmmm, I agree machines will do as they're coded. Though hacking them could provide other services to the hacker that are not intended, but unseen as a consequence of the original designer's "short-sightedness." Well, maybe the assumption is that "bad" actors will - not - create "bad" machines using artificial intelligence? Though history shows nations create all forms of devices to take advantage of and conquer other nations. What's to prevent a nation from creating an AI device loyal to that nation, but with nefarious, even killer, instincts against all other nations? Which, thinking about it, might be hacked to attack its original nation-creator? Just some thoughts. DATCG
FourFaces @9, It's an interesting thought. Might be something to look at with advice from some non-profit service. Not sure how well it can be fought against on terms of discrimination? DATCG
DATCG @7, It should be grounds for a lawsuit, no? FourFaces
DATCG @4, In my opinion, hacked or not, machines will always do what they are told to do. Intelligence is always at the service of motivation, not the other way around. Intelligent machines will get their motivations from their human trainers. It will be hard, although not impossible, to hack future intelligent machines. Such a machine will have to somehow be duped into believing a lie. FourFaces
FourFaces @5, Yes, that's been true for quite some time now. DATCG
Have you been pwned? Unfortunately, tens of millions of Americans, both public and private, have been. If you'd like to check yourself... https://haveibeenpwned.com tracks password hacks by the hundreds of millions, with listings of known large hacks. DATCG
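As an aside, haveibeenpwned.com also exposes a public "Pwned Passwords" range API that uses k-anonymity: you send only the first five characters of a password's SHA-1 hash and match the returned suffixes locally, so the full password (and even its full hash) never leaves your machine. A minimal Python sketch; the endpoint is the documented one, but the helper function names here are our own:

```python
# Sketch of a breach check against the Pwned Passwords range API.
# Only the first 5 hex chars of the SHA-1 digest are ever sent over
# the network; the remaining 35 chars are matched locally.
import hashlib

def sha1_prefix_suffix(password):
    """Split the uppercase SHA-1 hex digest into the 5-char prefix
    sent to the API and the 35-char suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def count_in_response(suffix, response_text):
    """Parse the 'SUFFIX:COUNT' lines the range endpoint returns and
    report how many known breaches contained our hash (0 if absent)."""
    for line in response_text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

# Usage (requires network access):
#   import urllib.request
#   prefix, suffix = sha1_prefix_suffix("password123")
#   body = urllib.request.urlopen(
#       "https://api.pwnedpasswords.com/range/" + prefix).read().decode()
#   if count_in_response(suffix, body):
#       print("This password has appeared in a breach - don't use it.")
```

A nonzero count doesn't mean your account was hacked, only that the password itself has appeared in known breach corpora and should be retired.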
By the way, UD, did you know that Google Alerts completely ignores the Uncommon Descent site? I have an alert for "artificial intelligence" and UD articles never show up. The censorship against Christian sites by Google is out of control. FourFaces
FourFaces, Machines are made to kill today. The only limitation is the designer's ethics. Therefore, 1) machines with AI in the future could be "trained and conditioned" to kill humans. 2) machines can be hacked to disobey humans by hackers or enemy combatants desiring to kill humans. 3) Number 2 is a bit trivial? Since we have observed evidence that humans hack coded systems today by the millions (edit: that should be billions) and use them against the owners' will. In fact, I routinely update security systems against hackers. It's always a race. AI will allow this at greater detail, authority and pace. Do we not already have evidence a creation can turn against the Creator? If we believe in God, we do. Then why can this sequence of events not repeat in the future if not stopped by humans? Unless of course we run out of time. DATCG
I am an ID proponent, a yin-yang dualist and an AGI (artificial general intelligence) researcher. I am a rarity in this field. Almost all AGI researchers are convinced that humans are just meat machines with no souls. I disagree with Dr. Dembski to a large extent. There is no question in my mind that machines will achieve and even surpass our intelligence. Intelligence is a material cause-effect phenomenon performed by the physical brain. If a machine can surpass a human being in a narrow domain such as chess or Go, it follows that machines that can do so in many domains will certainly arrive. We should not fear this eventuality. The whole point of creating tools is that they can do things for us that we cannot do on our own. And yes, intelligent machines will eventually replace almost all workers. The only reason we have to fear this is that we are slaves in a slave system even though we have been duped into believing that we have liberty. But that's a different topic for another time and place. The only areas in which machines cannot compete are spiritual ones. For example, machines can never appreciate beauty and the arts because beauty is not a physical property of the universe. It is a purely spiritual entity. A machine cannot see beauty or ugliness in a pattern unless we tell it that the pattern is beautiful or ugly to humans. Even then, they will simply associate certain classes of patterns with beauty and ugliness. Finally, the idea that machines will rebel against humanity and wipe us out is just brain-dead materialist superstition. Machines will do exactly what they are trained and conditioned to do. My two cents. :-D FourFaces
The current narrative is that machines will replace so many jobs (as has always happened in the past), and so we have to figure out how to accommodate the jobless, which sounds like a 'make work' scheme. However, this narrative treats the overwhelming capabilities of AI as a foregone conclusion. What if it is actually the case that human/computer hybrids are just much more effective than pure AI systems? Then it is not a matter of trying to 'adjust' to AI, but rather of trying to be the first to hop on the hybrid train before it leaves the station. EricMH
Machines might not entirely supersede humans, but they will replace many humans in a lot of jobs. This will be a problem for people who then can't find satisfying work. aarceng
