Uncommon Descent Serving The Intelligent Design Community
Category: Artificial Intelligence

2018 AI Hype Countdown 5: Robert J. Marks on the claim, AI Can Fight Hate Speech!

AI help, not hype, with Robert J. Marks: AI can carry out its programmers’ biases and that’s all: Some people may be under the illusion that AI detection of hate speech will be disinterested and fair. After all, the assessment is being done by a computer, which has no ideology or political leanings. An added strength is that the program is being written by “scientists” who are never corrupted by political bias. 🙂 In reality, every computer program contains bias. Without bias, computers cannot do anything smart. This is a major theme of the book I co-authored titled Introduction to Evolutionary Informatics. The question is, what is the bias? … More. See also: 2018 AI Hype Countdown 6: AI Can Read More ›

2018 AI Hype Countdown 6: Robert J. Marks on the claim, AI Can Even Find Loopholes in the Code!

AI adopts a solution in an allowed set, maybe not the one you expected: In the same paper in which researchers purported to find examples of AI creativity, we also read the following statement about problems with performance: “Exacerbating the issue, it is often functionally simpler for evolution to exploit loopholes in the quantitative measure than it is to achieve the actual desired outcome.” One example they offered of this type of gaming the system was a walking digital robot that moved more quickly by somersaulting than by using a normal walking gait. That was a very interesting result. But again, recognized or not, somersaults were allowed in the solution set offered by the programmer. … I was once working Read More ›
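The somersaulting robot is an instance of what the excerpt describes: an optimizer rewarded by a quantitative measure will exploit whatever the solution set allows. A minimal sketch (entirely hypothetical action names and speed values, not the study’s actual setup) shows a simple hill-climbing search drifting away from the “intended” gait toward the loophole, simply because the loophole scores higher:

```python
import random

# Hypothetical speeds for each allowed action (assumed values for illustration).
# "somersault" is in the allowed set, whether or not the programmer expected it to win.
SPEED = {"stand": 0.0, "walk": 1.0, "run": 2.0, "somersault": 3.5}
ACTIONS = list(SPEED)

def fitness(plan):
    """The quantitative measure: total distance covered by a sequence of actions."""
    return sum(SPEED[a] for a in plan)

def hill_climb(steps=200, plan_len=5, seed=0):
    """Mutate one action at a time, keeping any change that scores at least as well."""
    rng = random.Random(seed)
    plan = ["walk"] * plan_len              # start with the "intended" gait
    for _ in range(steps):
        candidate = plan[:]
        candidate[rng.randrange(plan_len)] = rng.choice(ACTIONS)
        if fitness(candidate) >= fitness(plan):
            plan = candidate                # accept the mutation
    return plan

print(hill_climb())  # the plan drifts toward "somersault" moves
```

The optimizer is not being creative; it is converging on the highest-scoring member of the set the programmer defined.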

2018 AI Hype Countdown 8: AI Just Needs a Bigger Truck!

AI help, not hype, with Robert J. Marks: Can we create superintelligent computers just by adding more computing power? The claim that AI can be written to evolve even smarter AI is slowly being abandoned. AI software pioneer François Chollet, for example, concluded in “The Impossibility of Intelligence Explosion” that the search should be abandoned: “An overwhelming amount of evidence points to this simple fact: a single human brain, on its own, is not capable of designing a greater intelligence than itself.” A computer cannot do that either. Some think computers could greatly exceed human intelligence if only we added more computing power. That reminds me of an old story… More. See also: 2018 AI Hype Countdown 9: Will That Read More ›

2018 AI Hype Countdown 9: Will That Army Robot Squid Ever Be “Self-Aware”?

From the 2018 AI Hype Countdown at Mind Matters #9: AI help, not hype, with Robert J. Marks: What would it take for a robot to be self-aware? The Army Times headline would jolt your morning coffee: “Army researchers are developing a self-aware squid-like robot you can 3D print in the field.” Reporter Todd South helpfully adds, “your next nightmare.” The thrill of fear invites the reader to accept the metaphorical claim that the robot will be “self-aware” as a literal fact. Although we could, for technical reasons, quibble with the claim that the robot squid will be printed in 3D, we won’t just now. Let’s focus instead on the seductive semantics of the term “self-aware.” For humans, Oxford tells Read More ›

New journal: The human mind from a computer science perspective

The Blyth Institute’s new journal will offer a focus on artificial intelligence and philosophy, as well as philosophical questions in mathematics and engineering. The Blyth Institute, a think tank that explores the relationships between biology, cognitive science, and engineering, has launched a new journal, Communications of the Blyth Institute, with Eric Holloway as Managing Editor and Jonathan Bartlett as Associate Editor. Communications is intended as a discussion forum for fresh ideas in a variety of areas, including philosophy of mind as seen from a computer science perspective. It is open to ID-friendly ideas. The inaugural issue covers such topics as Eric Holloway, Creativity and Machines (p. 13); Jonathan Bartlett, Simplifying and Refactoring Introductory Calculus (p. 17); T. M. Koch, Recategorizing the Human Read More ›

Jonathan Bartlett: The First Law of Automation Is Think!

The worst trap that people who are pursuing automation fall into is the desire to automate everything. That’s usually a road to disaster. Automation is supposed to save time and money, but it can wind up costing you both if you don’t carefully consider what you automate. How Automation Goes Wrong: Elon Musk found this out the hard way. His original dream called for the Model 3 to be built almost entirely by robots. He believed that automation would increase the speed and decrease the costs of his production line. However, as GM found out in the 1980s, when an automated line goes wrong, you wind up automating failure instead of success. Apart from the fact that the Read More ›

Proven: If you torture Big Data enough, it will confess to anything

In his fascinating new book The AI Delusion, economics professor Gary Smith reminds us that computers don’t have common sense. He also notes that, as data gets larger and larger, nonsensical coincidences become more probable, not less. Read More ›
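Smith’s point that bigger data makes nonsensical coincidences more probable, not less, can be illustrated with a minimal simulation (an assumed setup for illustration, not an example from the book): every series below is pure random noise, yet the strongest correlation found keeps climbing as we search through more candidate predictors.

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def best_spurious_r(n_predictors, n_obs=20, seed=1):
    """Best |correlation| between a noise target and n_predictors noise series."""
    rng = random.Random(seed)
    target = [rng.gauss(0, 1) for _ in range(n_obs)]
    best = 0.0
    for _ in range(n_predictors):
        noise = [rng.gauss(0, 1) for _ in range(n_obs)]
        best = max(best, abs(pearson(noise, target)))
    return best

for k in (10, 100, 1000):
    print(k, round(best_spurious_r(k), 2))  # best |r| climbs as the search widens
```

Nothing here is a real relationship; the “discovery” is an artifact of how many places we looked.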

What makes otherwise intelligent people believe in an AI apocalypse?

Stephen Hawking was hardly the only one: Along with Sir Martin Rees, Elon Musk, and Henry Kissinger, among many lesser knowns, the late Stephen Hawking worried about an AI apocalypse (the “worst event in the history of our civilization”). Otherwise very bright people don’t seem to have a grasp of the underlying situation. Let’s take just two examples: 1. What would we need to make machines “intelligent”? We don’t even understand animal intelligence clearly. Are seals really smarter than dogs? Plants can communicate to adjust to their circumstances without a mind or brain. Where does that place plants with respect to intelligence? And what about the importance of the brain? Humans with seriously compromised brains can have consciousness. News, “Stephen Read More ›

Jonathan Bartlett: AI and the Future of Murder

He wonders: If I kill you but upload your mind into an android, did I murder you or just modify you? Is it even possible to upload your consciousness to a computer and, if so, is it still really you? The sci-fi TV series Agents of S.H.I.E.L.D. (2013– ) tackled this question in an episode titled “Self Control.” Scientist Holden Radcliffe has an android assistant appropriately named Aida (Artificial Intelligence Digital Assistant). Together, they build a virtual world called The Framework, into which people can be plugged and uploaded. More at Mind Matters. See also: McDonald’s, meet McPathogen. Robert J. Marks: What happens when the drive to automate everything meets the Law of Unintended Consequences? I have a wager with a Read More ›

Can AI help scientists formulate ideas?

Yes, if you mean “dumb AI,” and there ain’t no “smart AI”: Quantity is definitely a solved problem. STM, the “voice of scholarly publishing,” estimated in 2015 that roughly 2.5 million science papers are published each year. Some are, admittedly, in predatory or fake journals, but over 28,000 journals are assumed to be genuine. From all this we can deduce that most scientists have not read most of the literature in their field, though they probably read immediately relevant or ground-breaking findings. But the question has arisen whether, in some cases, scientists have even read papers in which they are listed as authors. A report in Nature (September 2018) revealed that “Thousands of scientists publish a paper every five days” Read More ›
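The scale of the problem is easy to check on the back of an envelope, using only the figures quoted above (2.5 million papers a year; one paper every five days for the most prolific authors):

```python
# Figures quoted in the excerpt above.
PAPERS_PER_YEAR = 2_500_000
DAYS_PER_PAPER = 5  # "a paper every five days"

# A hyperprolific author's annual output:
hyperprolific_rate = 365 // DAYS_PER_PAPER
print(hyperprolific_rate)  # 73 papers a year

# Even a scientist reading one paper a day covers only a sliver of the flood:
read_per_year = 365
print(read_per_year / PAPERS_PER_YEAR)  # 0.000146, i.e. about 0.015% of annual output
```

No human reader, however diligent, keeps up with more than a vanishing fraction of the literature; that is the gap the “AI assistant for science” proposals aim at.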