Uncommon Descent | Serving The Intelligent Design Community

Category: Artificial Intelligence

Neurosurgeon: Neither books nor brains learn, only minds learn

Recently, neurosurgeon Michael Egnor offered a parable about whether machines really learn. The tale features a book that “learned” to fall open at the right places. Computer scientist Jeffrey Shallit responded, claiming that machines really CAN learn, and Dr. Egnor replied, pointing out that a baseball glove can “learn” the game if adjustment to circumstances is all we are counting. But he also wanted to make clear to Dr. Shallit that brains don’t learn either. Only minds learn: Shallit implies that the reinforcement and suppression of neural networks in the brain that accompanies learning means that brains, like machines, learn. He is mistaken. Brains are material organs that contain neurons and glia, and a host of cells and substances. Brains Read More ›

Google is doomed because it doesn’t get information theory

That’s tech philosopher George Gilder’s view: Last month, World News Daily did a three-part interview with George Gilder on the publication of Life after Google: The Fall of Big Data and the Rise of the Blockchain Economy, which unpacks some of the book’s main ideas: Part One: “Reagan guru, predictor of iPhone foresees new web revolution” In 1981, his bestselling “Wealth and Poverty” provided a blueprint for the economic revolution led by Ronald Reagan, who cited him more than any other living author. In the 1994 version of his book “Life After Television,” he predicted the digital world in which we now live and the invention of the smartphone that now dominates daily life. And long before the iPhone was introduced Read More ›

Jeffrey Shallit takes on Mike Egnor: Machines really CAN learn!

Neurosurgeon Michael Egnor tells us that computer scientist Jeffrey Shallit, known for attacking ID, has responded to his recent parable explaining why machines can’t learn. According to Shallit, a computer is not just a machine but something quite special: Computer scientist Jeffrey Shallit takes issue with my parable (September 8, 2018) about “machine learning.” The tale features a book whose binding cracks at certain points from the repeated use of certain pages. The damage makes those oft-consulted pages easier for the next user to find. My question was, can the book be said to have “learned” the users’ most frequent needs? I used the story about the book to argue that “machine learning” is an oxymoron. Like the book, machines Read More ›

Computer engineer Eric Holloway: Artificial intelligence is impossible

Holloway distinguishes between meaningful information and artificial intelligence: What is meaningful information, and how does it relate to the artificial intelligence question? First, let’s start with Claude Shannon’s definition of information. Shannon (1916–2001), a mathematician and computer scientist, stated that an event’s information content is the negative logarithm (base 2, when measured in bits) of its probability. So, if I flip a coin, I generate 1 bit of information, according to his theory. The coin came down heads or tails. That’s all the information it provides. However, Shannon’s definition of information does not capture our intuition of information. Suppose I paid money to learn a lot of information at a lecture and the lecturer spent the whole session flipping a coin and calling out the result. Read More ›
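To make Shannon’s measure concrete (a minimal sketch of the standard formula, not Holloway’s own code), the self-information of an event with probability p is -log2(p) bits:

```python
import math

def self_information(p: float) -> float:
    """Shannon self-information of an event with probability p, in bits."""
    return -math.log2(p)

print(self_information(0.5))   # fair coin flip: 1.0 bit
print(self_information(1/6))   # a given face of a fair die: ~2.585 bits
print(self_information(0.99))  # a near-certain event: ~0.014 bits
```

On this measure, a lecture consisting of 100 coin flips delivers 100 bits no matter how pointless its content, which is exactly the gap between Shannon information and meaningful information that Holloway goes on to explore.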

Why computer programs that mimic the human brain will continue to underperform

Our physics color commentator Rob Sheldon offers a comment on whether simple probabilities can outweigh “deep learning” (as noted earlier here). When neural nets [computer programs that mimic the human brain] were all the rage in physics, some 25 years ago, I spoke with the author of a paper who was using neural nets to predict space weather. After a year of playing with the predictive abilities of various 1-level, 2-level, and higher-level node nets, he confided that they reached a certain level of ability and then failed to improve. What made them better, he told me, was having more physics inserted into the model. That is, the nets couldn’t recreate Newton’s Laws, and if presented with just raw data, Read More ›
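Sheldon’s anecdote can be illustrated with a toy example (my construction, not his colleague’s space-weather model): a simple linear fit on raw time measurements cannot capture free fall, but once the physics is inserted as a feature (distance scales with the square of time), the same simple model fits exactly.

```python
import numpy as np

# Simulated free-fall data: d = 0.5 * g * t^2, with g = 9.8 m/s^2
t = np.linspace(0, 5, 50)
d = 0.5 * 9.8 * t**2

# Raw data only: a straight line in t cannot capture the curve
raw_coeffs = np.polyfit(t, d, 1)
raw_err = np.mean((np.polyval(raw_coeffs, t) - d) ** 2)

# Physics inserted: with t^2 as the feature, a linear model is exact
phys_coeffs = np.polyfit(t**2, d, 1)
phys_err = np.mean((np.polyval(phys_coeffs, t**2) - d) ** 2)

print(f"raw-feature error: {raw_err:.1f}, physics-feature error: {phys_err:.2e}")
```

The model never “discovers” Newton’s law in this setup; a human supplies it, and the fitting problem shrinks to a single coefficient.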

Researcher: A “chemical brain” will solve the hard problem of consciousness

Because silicon can’t, says chemist: When Lee Cronin was 9 he was given a Sinclair ZX81 computer and a chemistry set. Unlike most children, Cronin imagined how great it would be if the two things could be combined to make a programmable chemical computer. Now 45 and the Regius Chair of Chemistry at the University of Glasgow, Cronin leads a research team of more than 50 people, but his childhood obsessions remain. He is constructing chemical brains, and has ambitions to create artificial life – using a radical new approach. Rowan Hooper, “Why creating a chemical brain will be how we understand consciousness” at New Scientist (paywall) The problem with consciousness is not that we don’t understand how it originates but that Read More ›

Making intelligent machines persons hits a few snags

Earlier this year, over 150 experts in AI, robotics, ethics, and supporting disciplines signed an open letter denouncing the European Parliament’s proposal to make intelligent machines persons. According to Canadian futurist George Dvorsky, the Parliament’s purpose is to hold the machines liable for damages, as a corporation might be: “The EU is understandably worried that the actions of these machines will be increasingly incomprehensible to the puny humans who manufacture and use them.” AI experts acknowledge that no such robots currently exist. But many argue, as does Seth Baum of the Global Catastrophic Risk Institute, “Now is the time to debate these issues, not to make final decisions.” AI philosopher Michael LaBossiere likewise wants to “try to avoid our usual Read More ›

Johnny Bartlett: Bitcoin and the social value of trust

It is very interesting to study a technology that doesn’t rely on trust. However, in the end, the most interesting thing it tells us is not how we should build a network but rather the social value of trust in society. More than economic power, more than scientific advances, trust is really what builds wealth in a society. When you can trust your neighbor not to steal, not to lie, not to try to ruin you, the increases in efficiency are gigantic. In the comparison between Bitcoin and the Visa network, the efficiency gain of trust vs. lack of trust is 400,000x. My hat is off to Bitcoin, not only for developing an interesting technology but also for Read More ›

AI and pop music: Can simple probabilities outperform deep learning?

Haebichan Jung tells us that he built an original pop-music-making machine “that could rival deep learning but with simpler solutions.” Deep learning “is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain called artificial neural networks.” (Jason Brownlee, Machine Learning Mastery) Jung tells us that he went to considerable trouble to develop deep learning methods for generating machine pop music but in the end… I made a simple probabilistic model that generates pop music… Eric Holloway notes that this is an expected outcome, based on the fact that computers cannot generate mutual information, a measure of how strongly two variables depend on each other. “Can simple probabilities outperform deep learning?” at Mind Matters Today Read More ›
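For a sense of what a “simple probabilistic model” can mean here (a generic sketch of the idea, not Jung’s actual system), a first-order Markov chain picks each note using only the transition frequencies observed in example melodies:

```python
import random
from collections import defaultdict

# Toy training melodies as note names; a real system would use MIDI data
melodies = [
    ["C", "D", "E", "G", "E", "D", "C"],
    ["C", "E", "G", "E", "C", "D", "C"],
]

# Tally which notes follow which: this table is the entire "model"
transitions = defaultdict(list)
for melody in melodies:
    for prev, nxt in zip(melody, melody[1:]):
        transitions[prev].append(nxt)

def generate(start="C", length=8):
    """Sample a new melody one note at a time from observed transitions."""
    notes = [start]
    for _ in range(length - 1):
        notes.append(random.choice(transitions[notes[-1]]))
    return notes

print(generate())  # e.g. ['C', 'E', 'G', 'E', 'D', 'C', 'D', 'C']
```

No neural network is involved: the generator simply replays the statistics of its examples, which is why such models can sound surprisingly plausible for formulaic genres.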

How do emotional robots “care”?

Well, they don’t, exactly, but here’s the deal. From Sapiens, a journal of Anthropology/Everything Human: Pepper is a white, semi-humanoid robot, about the size of a 6-year-old, made by Tokyo-based SoftBank Robotics. You may have seen him working in a bank or a hotel, or being interviewed by Neil deGrasse Tyson. According to the company, Pepper was designed “to be a genuine day-to-day companion whose number one quality is his ability to perceive emotions.” Pepper uses cameras and sensors to detect a person’s facial expression, tone of voice, body movements, and gaze, and the robot reacts to those—it can talk, gesture, and even dance on wheels. How does Pepper care? Pepper and other emotional robots are particularly designed, for Read More ›

Can machines really learn? Neurosurgeon Michael Egnor offers a parable

At Mind Matters Today: “Machine learning” is a hot field, and tremendous strides are being made in programming machines to improve as they work. Such machines work toward a goal, in a way that appears autonomous and seems eerily like human learning. But can machines really learn? What happens during machine learning, and is it the same thing as human learning? Because the algorithms that generate machine learning are complex, what is really happening during the “learning” process is obscured both by the inherent complexity of the subject and the technical jargon of computer science. Thus it is useful to consider the principles that underlie machine learning in a simplified way to see what we really mean by such “learning.” Read More ›
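To see the simplified picture in code (my sketch, in the spirit of Egnor’s book parable rather than any real machine-learning library), here is “learning” reduced to mechanical adjustment: repeated use biases future retrieval, with no understanding anywhere in the loop.

```python
from collections import Counter

class WellThumbedBook:
    """A book whose binding cracks at oft-used pages: repeated
    consultation mechanically biases where it falls open next."""

    def __init__(self, pages):
        self.pages = pages
        self.wear = Counter()      # how often each topic is consulted

    def consult(self, topic):
        self.wear[topic] += 1      # the binding cracks a little more
        return self.pages[topic]

    def falls_open_at(self):
        # "Falls open" at the most-worn topic: adjustment, not insight
        return self.wear.most_common(1)[0][0]

book = WellThumbedBook({"recipes": "p. 12", "repairs": "p. 87"})
for _ in range(3):
    book.consult("repairs")
book.consult("recipes")
print(book.falls_open_at())  # "repairs"
```

Whether this sort of adjustment counts as learning in the human sense is precisely the question the parable puts on the table.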

Could AI understand the universe better than we do?

Better than we ever could? Recently, we discussed well-known chemist and atheist proponent Peter Atkins’s claim that science, not philosophy, answers the Big Questions: One class consists of invented questions that are often based on unwarranted extrapolations of human experience. They typically include questions of purpose and worries about the annihilation of the self, such as Why are we here? and What are the attributes of the soul? They are not real questions, because they are not based on evidence. Thus, as there is no evidence for the Universe having a purpose, there is no point in trying to establish its purpose or to explore the consequences of that purported purpose. As there is no evidence for the existence of Read More ›

Could HAL 9000 ever be built? Robert Marks thinks so

But could the psychotic computer ever be conscious? That’s another story. Marks, an author of Introduction to Evolutionary Informatics, weighs in on the 50th anniversary of 2001: A Space Odyssey. At one point on the trip from Earth to Jupiter, HAL becomes suspicious that the crew might be sabotaging the mission. HAL then purposely tries to kill all the crew. The most logical explanation for this act is a coding error. HAL was programmed to operate on the basis that the mission took priority over human life. By contrast, science fiction writer Isaac Asimov did not allow his AI to kill. … More. See also: Screenwriters’ jobs are not threatened by AI (Robert J. Marks) and AI That Can Read Minds? Deconstructing AI Read More ›

Daniel Dennett thinks a game can show that computers could really think

Fr. Robert Verrill, OP, takes a different view: In his paper “Real Patterns,” Tufts University philosopher Daniel Dennett writes the following: In my opinion, every philosophy student should be held responsible for an intimate acquaintance with the Game of Life. It should be considered an essential tool in every thought-experimenter’s kit, a prodigiously versatile generator of philosophically important examples and thought experiments of admirable clarity and vividness. One of the reasons Dennett likes the Game of Life is that he thinks it can help us understand how computers could be genuinely intelligent. Now I do think the Game of Life provides us with some interesting thought experiments, but for precisely the opposite reason to Dennett’s: the Game of Life simulation Read More ›
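For readers who want the Game of Life in their thought-experimenter’s kit, the whole simulation rests on two rules: a live cell survives if it has two or three live neighbors, and a dead cell comes alive with exactly three. A minimal sketch:

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    # Count the live neighbors of every cell adjacent to a live cell
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": five cells that travel diagonally forever under these rules
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same shape, shifted one cell down and right
```

Everything that happens on the grid, gliders included, is fully determined by these two rules, which is what makes the Game such fertile ground for the disagreement between Dennett and Verrill.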

Why it’s hard to model robots on human behavior

As robotics engineer Ken Goldberg explains, What has working with robots taught you about being human? It has taught me to have a huge appreciation for the nuances of human behavior and the inconsistencies of humans. There are so many aspects of human unpredictability that we don’t have a model for. When you watch a ballet or a dance or see a great athlete and realize the amazing abilities, you start to appreciate those things that are uniquely human. The ability to have an emotional response, to be compelling, to be able to pick up on subtle emotional signals from others, those are all things that we haven’t made any progress on with robots. What’s the most creative thing a Read More ›