Uncommon Descent Serving The Intelligent Design Community
Category

Mind

Feser (and Ross) on the immateriality of the mind

Edward Feser has presented a lecture on the immateriality of the mind, which is worth listening to; the papers here and here flesh out the details. The core of the argument pivots on the principle of identity: distinguishable entities are inherently different. Syllogistically: 1: Formal thought processes can have an exact or unambiguous conceptual content. However, 2: Nothing material can have an exact or unambiguous conceptual content. Therefore, 3: Formal thought processes are not material. Worth pondering as we reflect this Christmas season. Enjoy it. END
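The syllogism's validity (as distinct from the truth of its premises) can even be checked mechanically. Here is a sketch in Lean 4, with `Formal`, `Material`, and `Determinate` as hypothetical predicate names standing in for the argument's terms:

```lean
-- A sketch only: if every formal thought process is determinate (premise 1)
-- and nothing material is determinate (premise 2), then no formal thought
-- process is material (conclusion). Lean accepts the inference.
example {α : Type} (Formal Material Determinate : α → Prop)
    (h1 : ∀ x, Formal x → Determinate x)
    (h2 : ∀ x, Material x → ¬ Determinate x) :
    ∀ x, Formal x → ¬ Material x :=
  fun x hf hm => h2 x hm (h1 x hf)
```

The dispute, of course, is over premise 2, not the logic.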

2018 AI Hype Countdown 8: AI Just Needs a Bigger Truck!

AI help, not hype, with Robert J. Marks: Can we create superintelligent computers just by adding more computing power? The claim that AI can be written to evolve even smarter AI is slowly being abandoned. AI software pioneer François Chollet, for example, concluded in “The Impossibility of Intelligence Explosion” that the search should be abandoned: “An overwhelming amount of evidence points to this simple fact: a single human brain, on its own, is not capable of designing a greater intelligence than itself.” A computer cannot do that either. Some think computers could greatly exceed human intelligence if only we added more computing power. That reminds me of an old story… More. See also: 2018 AI Hype Countdown 9: Will That Army Robot Squid Ever Be “Self-Aware”? Read More ›

2018 AI Hype Countdown 9: Will That Army Robot Squid Ever Be “Self-Aware”?

From the 2018 AI Hype Countdown at Mind Matters #9: AI help, not hype, with Robert J. Marks: What would it take for a robot to be self-aware? The Army Times headline would jolt your morning coffee: Army researchers are developing a self-aware squid-like robot you can 3D print in the field Reporter Todd South helpfully adds, “your next nightmare.” The thrill of fear invites the reader to accept the metaphorical claim that the robot will be “self-aware” as a literal fact. Although we could, for technical reasons, quibble with the claim that the robot squid will be printed in 3D, we won’t just now. Let’s focus instead on the seductive semantics of the term “self-aware.” For humans, Oxford tells Read More ›

New journal: The human mind from a computer science perspective

The Blyth Institute’s new journal will offer a focus on artificial intelligence and philosophy, as well as philosophical questions in mathematics and engineering. The Blyth Institute, a think tank that explores the relationships between biology, cognitive science, and engineering, has launched a new journal, Communications of the Blyth Institute, with Eric Holloway as Managing Editor and Jonathan Bartlett as Associate Editor. Communications is intended as a discussion forum for fresh ideas in a variety of areas, including philosophy of mind as seen from a computer science perspective. It is open to ID-friendly ideas. The inaugural issue covers such topics as: Eric Holloway, Creativity and Machines, 13; Jonathan Bartlett, Simplifying and Refactoring Introductory Calculus, 17; T. M. Koch, Recategorizing the Human Read More ›

Jonathan Bartlett: The First Law of Automation Is Think!

The worst trap that people pursuing automation fall into is the desire to automate everything. That’s usually a road to disaster. Automation is supposed to save time and money, but it can wind up costing you both if you don’t carefully consider what you automate. How Automation Goes Wrong: Elon Musk found this out the hard way. His original dream called for the Model 3 to be built almost entirely by robots. He believed that automation would increase the speed and decrease the costs of his production line. However, as GM found out in the 1980s, when an automated line goes wrong, you wind up automating failure instead of success. Apart from the fact that the Read More ›

Proven: If you torture Big Data enough, it will confess to anything

In his fascinating new book The AI Delusion, economics professor Gary Smith reminds us that computers don’t have common sense. He also notes that, as data gets larger and larger, nonsensical coincidences become more probable, not less. Read More ›

What makes otherwise intelligent people believe in an AI apocalypse?

Stephen Hawking was hardly the only one: Along with Sir Martin Rees, Elon Musk, and Henry Kissinger, among many lesser knowns, the late Stephen Hawking worried about an AI apocalypse (the “worst event in the history of our civilization”). Otherwise very bright people don’t seem to have a grasp of the underlying situation. Let’s take just two examples: 1. What would we need to make machines “intelligent”? We don’t even understand animal intelligence clearly. Are seals really smarter than dogs? Plants can communicate to adjust to their circumstances without a mind or brain. Where does that place plants with respect to intelligence? And what about the importance of the brain? Humans with seriously compromised brains can have consciousness. News, “Stephen Read More ›

Jonathan Bartlett: AI and the Future of Murder

He wonders: If I kill you but upload your mind into an android, did I murder you or just modify you? Is it even possible to upload your consciousness to a computer and, if so, is it still really you? The sci-fi TV series Agents of S.H.I.E.L.D. (2013– ) tackled this question in an episode titled “Self Control.” Scientist Holden Radcliffe has an android assistant appropriately named Aida (Artificial Intelligence Digital Assistant). Together, they build a virtual world, called The Framework, into which people can be plugged and uploaded. More at Mind Matters. See also: McDonald’s, meet McPathogen. Robert J. Marks: What happens when the drive to automate everything meets the Law of Unintended Consequences? I have a wager with a Read More ›

Previously unknown human brain region identified

Could be unique to humans: It turns out we humans may have an extra type of thinky bit that isn’t found in other primates. A previously unknown brain structure was identified while scientists carefully imaged parts of the human brain for an upcoming atlas on brain anatomy. Neuroscientist George Paxinos and his team at Neuroscience Research Australia (NeuRA) have named their discovery the endorestiform nucleus – because it is located within (endo) the inferior cerebellar peduncle (also called the restiform body). It’s found at the base of the brain, near where the brain meets the spinal cord. This area is involved in receiving sensory and motor information from our bodies to refine our posture, balance, and movements. Tessa Koumoundouros, “Neuroscientists Have Read More ›

Can AI help scientists formulate ideas?

Yes, if you mean “dumb AI,” and there ain’t no “smart AI”: Quantity is definitely a solved problem. STM, the “voice of scholarly publishing,” estimated in 2015 that roughly 2.5 million science papers are published each year. Some are, admittedly, in predatory or fake journals. But over 28,000 journals are assumed to be genuine. From all this, we can deduce that most scientists have not read most of the literature in their field, though they probably read immediately relevant or ground-breaking findings. But the question has arisen whether, in some cases, scientists have even read papers in which they are listed as authors. A report in Nature (September 2018) revealed that “Thousands of scientists publish a paper every five days” Read More ›
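The scale is easy to sanity-check with a back-of-envelope calculation using the figures quoted above (a sketch, not from the article):

```python
# Rough throughput of the scientific literature, per the ~2.5 million
# papers/year figure attributed above to STM (2015).
papers_per_year = 2_500_000
minutes_per_year = 365 * 24 * 60

papers_per_minute = papers_per_year / minutes_per_year
print(round(papers_per_minute, 1))  # roughly 4.8 new papers every minute
```

No one reads at that rate, which is the point: keeping up with "the literature" as a whole is not humanly possible.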

How to falsify reductionism with complex specified information

 A philosopher claims that neuroscience has proven thoughts do not exist. Eric Holloway looks at the neuroscience and examines the claim: There is a problem with this sort of reasoning. One could make the same argument about computer code, as follows: There is no code. It’s all just assembly language. Or, there is no assembly, it’s all just machine code. Or, there is no machine code, there are just voltage levels on transistors. One could continue following this chain of reasoning to the point where the transistors don’t exist. It’s just a bunch of electrons doing their thing. Of course, the electrons don’t really exist either. They’re just a bunch of quarks and leptons. In which case, the program your computer requires Read More ›
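Holloway's point about levels of description can be made concrete (a sketch, not from the article) with Python's standard `dis` module: the very same function exists simultaneously as readable source and as bytecode, and describing it at the lower level does not make the higher-level description unreal.

```python
import dis

def add(a, b):
    # Source level: a perfectly real, readable Python function.
    return a + b

# The function still works at the source level of description.
print(add(2, 3))  # prints 5

# Bytecode level: the same function, now described as opcodes.
ops = [ins.opname for ins in dis.Bytecode(add)]
print(ops)
```

One could go further down, to machine code, voltages, electrons, but at no step does the higher-level program stop existing.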

Robert Marks Talks Computers with Michael Medved

Robert J. Marks is one of the authors of Introduction to Evolutionary Informatics, with design theorist William Dembski and Winston Ewert. There’s little danger, he thinks, in computers ruling us but considerable danger that we can use them to magnify the impact of our errors. More. Here’s the podcast. See also: Human consciousness may not be computable One model of consciousness would mean that conscious computers are a physical impossibility. (Robert Marks)

Biologic Institute’s Brendan Dixon asks, could AI Winter be looming?

Artificial intelligence crashes are historically common: First, what caused previous AI winters? There was one straightforward reason: The technology did not work. Expert systems weren’t experts. Language translators failed to translate. Even Watson, after winning Jeopardy, failed to provide useful answers in the real-world context of medicine. When technology fails, winters come. Nearly all of AI’s recent gains have been realized due to massive increases in data and computing power that enable old algorithms to suddenly become useful. For example, researchers first conceived neural networks—the core idea powering much machine learning and AI’s notable advances—in the late 1950s. The worries of an impending winter arise because we’re approaching the limits of what massive data combined with hordes of computers can Read More ›
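To illustrate how old the "core idea" really is, here is a minimal sketch (not from the article) of the perceptron update rule from the late 1950s, the same kind of algorithm that only became broadly useful once data and compute caught up:

```python
# Minimal perceptron in the style of Rosenblatt (1958), learning AND.
# A sketch only: modern gains come largely from scaling updates like
# this one across vastly more data and compute, not from new ideas.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights
    b = 0.0         # bias
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred          # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x[0]      # nudge weights toward target
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
print([predict(x) for x, _ in data])  # [0, 0, 0, 1] — AND learned
```

The algorithm has not changed in its essentials; what changed is the scale of the data and hardware thrown at it, and that is exactly the resource the excerpt warns is approaching its limits.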