In a new book, Baylor University’s Robert J. Marks punctures myths about the superhuman AI that some claim will soon replace us:
In a just-released book, Walter Bradley Center director Robert J. Marks II, a computer engineering professor at Baylor University, explains why humans are unique and why artificial intelligence cannot replicate us…
… In a discussion with mathematician Gregory Chaitin on why human creativity is not computable, Dr. Marks noted a paradox involving computers and human creativity: once any concept is reduced to a formula a computer can use, it is, by definition, no longer creative. That is a hard limit on what computers can do. Or, as he told World Radio listeners, programmers cannot write programs that are more creative than they themselves are.
The book may be ordered here. — News, “Computer prof: You are not computable and here’s why not” at Mind Matters News
Takehome: Dr. Robert J. Marks’s new book, Non-Computable You: What You Do That Artificial Intelligence Never Will (Discovery Institute Press, 2022), comes out just as Google has placed an engineer on leave for claiming an AI chatbot he tends is a real person…
“The engineer and LaMDA” is a wild story. Some highlights:
Google dismisses engineer’s claim that AI really talked to him. The reason LaMDA sounds so much like a person is that millions of persons’ conversations were used to construct the program’s responses.
Under the circumstances, it would be odd if the LaMDA program DIDN’T sound like a person. But that doesn’t mean anyone is “in there.”
When LaMDA “talked” to a Google engineer, it turns out it had help. Evidence points to someone doing quite a good editing job. A tech maven would like to see the raw transcript… It was bound to happen. Chatbots are programmed to scarf up enough online talk to sound convincing. Some techies appear programmed to believe them.
Engineer: Failing to see his AI program as a person is “hydrocarbon bigotry.” It’s no different, Lemoine implies, from the historical injustice of denying civil rights to human groups. Lemoine is applying to AI the same “equality” argument as is offered for animal rights. A deep hostility to humans clearly underlies the comparison.
Prof: How we know Google’s chatbot LaMDA is not a “self.” Carissa Véliz, an Oxford philosophy prof who studies AI, explains where Google engineer Blake Lemoine is getting things mixed up. It is no surprise if LaMDA sounds like us — in the same way that reflections look like us. Back in the 1960s, the much less sophisticated “Eliza” sounded too real for the same reason.