The Turing test and the Lovelace test are attempts to determine whether computers can show human-like intelligence. Holloway asks what happens if researchers succeed in creating lifelike machines, machines that “want” things: “If we create an all-powerful artificial intelligence, we cannot assume it will be friendly. Thus, we need a Terminator test.”
The next iterations of science fraud will employ machine learning trained on enough of the internet to avoid obvious goofs. We will need better, more sophisticated methods of detection.
This is just a note for the record on what monism is (as opposed to dualism, creation by a Supreme and maximally great and good being, etc.). A useful point of departure is a diagram from Wikipedia contrasting dualism (of which it gives only one type) with monism; next to the diagram, Wikipedia notes the different types of monism.
The most likely reason for the persistence of computer-generated gibberish in the science database is that many other papers sound like that but are in fact authentic human creations, so no one really wants to go there.
Let’s see what happens in China. This could be an important test of human exceptionalism.
Angus Menuge: I don’t see any reason, from these amazing enhancements of the complexity of these [computer] systems, to think that the systems would move from not having subjective awareness to having it, or move to true intentionality about anything beyond themselves.
The problems of replicating oneself are addressed in a funny sci-fi short on human selfhood. For one thing, the replicant doesn’t know that he is not the original; he has no reason to think so.
Berger: “Man is an imperfect image of God, as we all regularly demonstrate. Some images are more degraded than others. Similarly, any image man creates of himself will be a less than perfect image of himself. Hence, man can never make AI that is in the image of God; he can only make a degraded image of himself.”
Computers require complete data to come to a correct conclusion, but humans often work very well with incomplete data.
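A minimal sketch of that contrast, assuming a toy sensor-averaging task (the function names and data below are hypothetical, not from the post): a strict computation refuses to conclude anything from incomplete readings, while a human-style heuristic estimates from whatever is available.

```python
# Hypothetical illustration: strict computation vs. human-style tolerance
# of missing data. None marks a missing reading.

def strict_average(values):
    """Refuses to conclude anything if any reading is missing."""
    if any(v is None for v in values):
        raise ValueError("incomplete data: cannot compute")
    return sum(values) / len(values)

def tolerant_average(values):
    """Human-style workaround: estimate from whatever readings exist."""
    known = [v for v in values if v is not None]
    return sum(known) / len(known) if known else None

readings = [20.1, None, 19.8, 20.3]  # one sensor dropped out

print(tolerant_average(readings))   # ~20.07, an estimate from partial data
try:
    strict_average(readings)
except ValueError as err:
    print(err)                      # incomplete data: cannot compute
```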
Dembski continues to reflect on Erik J. Larson’s new book, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (2021). He recalls his experiences learning to write boilerplate for a psychology chatbot back in 1982.
Larson explains what he hopes to convey to the reader about the limitations of Really Big Computers.
Dembski: In the cosseted and sanitized environments that we have constructed for ourselves in the U.S., we have no clue what capabilities AI actually needs to achieve to truly match what humans can do. The shortfall facing AI is extreme.
Takehome: Horgan finds that, despite the enormous advances in neuroscience, genetics, cognitive science, and AI, our minds remain “as mysterious as ever.”
Abductive reasoning is part of design theory. Interesting that computers can’t do it…
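To see what abduction asks of a machine, here is a minimal sketch (hypothetical, not from the post). Ranking a fixed, hand-supplied list of candidate explanations is easy to mechanize; what has resisted mechanization, on this view, is generating the candidates in the first place, since the hypothesis space is open-ended.

```python
# Hypothetical sketch of "inference to the best explanation" over a FIXED,
# hand-written hypothesis list. All names and numbers are made up.
# A machine can rank candidates like this; the hard, unmechanized step is
# inventing the candidate list for an open-ended world.

observation = "the grass is wet"

# prior: rough plausibility of the hypothesis on its own
# fit:   how well the hypothesis would explain the observation
candidates = [
    {"hypothesis": "it rained overnight", "prior": 0.50, "fit": 0.9},
    {"hypothesis": "the sprinkler ran",   "prior": 0.30, "fit": 0.9},
    {"hypothesis": "a water main burst",  "prior": 0.01, "fit": 0.8},
]

# Crude stand-in for P(hypothesis) * P(observation | hypothesis).
best = max(candidates, key=lambda c: c["prior"] * c["fit"])
print(f"Best explanation for '{observation}': {best['hypothesis']}")
```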
Dembski: “At the end of the discussion, however, Kurzweil’s overweening confidence in the glowing prospects for strong AI’s future was undiminished. And indeed, it remains undiminished to this day (I last saw Kurzweil at a Seattle tech conference in 2019 — age seemed to have mellowed his person but not his views).” But Larson says it’s all nonsense.