Neurosurgeon Michael Egnor tells us that computer scientist Jeffrey Shallit, known for attacking ID, has responded to his recent parable explaining why machines can’t learn. Egnor writes:
Computer scientist Jeffrey Shallit takes issue with my parable (September 8, 2018) about “machine learning.” The tale features a book whose binding cracks at certain points from the repeated use of certain pages. The damage makes those oft-consulted pages easier for the next user to find. My question was, can the book be said to have “learned” the users’ most frequent needs?
I used the story about the book to argue that “machine learning” is an oxymoron. Like the book, machines can “learn” only metaphorically, not in reality. Machines don’t have minds. Machines can change with repeated use, by design or by happenstance (as in the case of the book). They can become more effective tools because of such changes. But machines don’t learn, because learning—which is the acquisition of new knowledge—is something unique to creatures with minds, like human beings.
Shallit, however, argues that a computer is not just a machine, but something quite special:
To be genuinely considered a “computer”, a machine should be able to carry out basic operations such as comparisons and conditional branching. And some would say that a computer isn’t a real computer until it can simulate a Turing machine. A book with a cracked binding isn’t even close.
Of course, my parable was an analogy between a book and a computer. That was, in fact, my point. Even the most rudimentary device—a device far less complex than a computer—can be said to “learn” metaphorically through repeated use. That does not mean that it really learns, but only that it changes in a way that reminds us of learning.
The same metaphorical learning—not genuine learning—that a book with a cracked binding undergoes is what happens when computers “learn.” More. (Michael Egnor, “Machines really CAN learn!” at Mind Matters Today)
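The contrast the two writers are drawing can be made concrete in a few lines of code. The sketch below is purely illustrative (the names `CrackedBook`, `most_worn`, and `frequent_pages` are inventions, not anything from either author's post): the book merely accumulates physical change, while comparison and conditional branching—the basic operations Shallit says a machine needs before it counts as a computer—happen only when some agent, or a program, acts on that record.

```python
class CrackedBook:
    """Changes with use, but performs no comparisons or branching itself."""

    def __init__(self):
        self.wear = {}  # page number -> times that page has been opened

    def open_page(self, page):
        # The book's "learning" is nothing but accumulated physical change.
        self.wear[page] = self.wear.get(page, 0) + 1


def most_worn(book):
    """A *reader* comparing the wear is doing the comparing, not the book."""
    return max(book.wear, key=book.wear.get)


def frequent_pages(book, threshold):
    # Comparison (>=) and conditional selection: the operations Shallit's
    # criterion requires of anything genuinely called a computer.
    return [p for p, n in book.wear.items() if n >= threshold]


book = CrackedBook()
for p in [12, 12, 12, 90, 90, 7]:
    book.open_page(p)

print(most_worn(book))          # -> 12
print(frequent_pages(book, 2))  # -> [12, 90]
```

Whether running such comparisons makes the resulting change "learning" in a non-metaphorical sense is, of course, exactly the question the two authors dispute.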
See also: Eric Holloway: Artificial intelligence is impossible. Meaningful information vs artificial intelligence: Because the law of independence conservation states that no combination of randomness and determinism can create mutual information, no Turing machine or artificial intelligence can create mutual information either. Thus, the goal of artificial intelligence researchers to reproduce human intelligence with a computer program is impossible to achieve.
Does digitization threaten science? It enables new abuses, according to a Cambridge nanoscientist. The problem is not digitization as such, of course, but the mindset that it inadvertently encourages. Sometimes, for example, “citation rings” agree to cite each other’s papers so as to artificially inflate their rankings. Sometimes it graduates to “citation stacking.”