
Jeffrey Shallit takes on Mike Egnor: Machines really CAN learn!

Neurosurgeon Michael Egnor tells us that computer scientist Jeffrey Shallit, known for attacking ID, has responded to his recent parable explaining why machines can’t learn. According to Shallit, a computer is not just a machine, but something quite special:

Jeffrey Shallit

Computer scientist Jeffrey Shallit takes issue with my parable (September 8, 2018) about “machine learning.” The tale features a book whose binding cracks at certain points from the repeated use of certain pages. The damage makes those oft-consulted pages easier for the next user to find. My question was, can the book be said to have “learned” the users’ most frequent needs?

I used the story about the book to argue that “machine learning” is an oxymoron. Like the book, machines can “learn” only metaphorically, not in reality. Machines don’t have minds. Machines can change with repeated use, by design or by happenstance (as in the case of the book). They can become more effective tools because of such changes. But machines don’t learn, because learning—which is the acquisition of new knowledge—is something unique to creatures with minds, like human beings.

Shallit, however, argues that a computer is not just a machine, but something quite special:

To be genuinely considered a “computer”, a machine should be able to carry out basic operations such as comparisons and conditional branching. And some would say that a computer isn’t a real computer until it can simulate a Turing machine. A book with a cracked binding isn’t even close.

Of course, my parable was an analogy between a book and a computer. That was, in fact, my point. Even the most rudimentary device—a device far less complex than a computer—can be said to “learn” metaphorically through repeated use. That does not mean that it really learns, but only that it changes in a way that reminds us of learning.

The same metaphorical learning—not genuine learning—that a book with a cracked binding undergoes is what happens when computers “learn.” (Michael Egnor, “Machines really CAN learn!” at Mind Matters Today)

Michael Egnor
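
To make the parable concrete, here is a minimal sketch of the cracked-binding book as a data structure (a toy model in Python; the class and names are illustrative, not from Egnor’s article). Repeated use changes the object’s state so that heavily consulted pages surface faster, yet nothing resembling knowledge is represented anywhere:

```python
# Toy model of Egnor's parable: a "book" whose binding wear
# biases future lookups toward frequently consulted pages.
class WornBook:
    def __init__(self, num_pages):
        self.wear = [0] * num_pages   # cracks accumulated per page

    def consult(self, page):
        self.wear[page] += 1          # each use deepens the crack

    def falls_open_to(self):
        # The book "falls open" at the most-worn page: a purely
        # mechanical state change, with no record of why.
        return max(range(len(self.wear)), key=lambda p: self.wear[p])

book = WornBook(num_pages=300)
for _ in range(50):
    book.consult(42)                  # one page gets used again and again
book.consult(7)                       # another page, used once
print(book.falls_open_to())           # -> 42
```

Whether the wear counter counts as “memory” or “learning” is precisely the metaphorical question the parable raises.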

See also: Eric Holloway: Artificial intelligence is impossible. Meaningful information vs. artificial intelligence: Because the law of independence conservation states that no combination of randomness and determinism can create mutual information, no Turing machine or artificial intelligence can create mutual information either. Thus, the goal of artificial intelligence researchers to reproduce human intelligence with a computer program is impossible to achieve. (A rough sketch of the Shannon-information analogue of this conservation claim appears below.)

and

Does digitization threaten science? It enables new abuses, according to a Cambridge nanoscientist. The problem is not digitization as such, of course, but the mindset that it inadvertently encourages. Sometimes, for example, “citation rings” agree to cite each other’s papers so as to artificially inflate their rankings. Sometimes it graduates to “citation stacking.”
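
Holloway frames his argument in terms of algorithmic mutual information. A rough Shannon-information analogue of the conservation claim he invokes is the data processing inequality: deterministic or random post-processing of a signal cannot increase the information it carries about a source. Here is a minimal empirical sketch (the plug-in estimator and all variable names are mine, for illustration only):

```python
import numpy as np
from collections import Counter

def mutual_information_bits(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * np.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

rng = np.random.default_rng(0)
n = 100_000
z = rng.integers(0, 2, n)                    # fair random bits: the source
x = z ^ (rng.random(n) < 0.1).astype(int)    # z seen through 10% bit flips
y = x ^ (rng.random(n) < 0.2).astype(int)    # x pushed through more random flips

# Post-processing x, whether deterministically or with added randomness,
# cannot raise the information it carries about z: I(z; y) <= I(z; x).
print(mutual_information_bits(z, x))         # ~0.53 bits
print(mutual_information_bits(z, y))         # ~0.17 bits
```

With these flip rates the estimates should land near 0.53 and 0.17 bits: processing can only lose information about z, never create it.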

Comments
@polistra The parameter adjustment is very similar to the book-binding analogy; the binding is a parameter in this case. An interesting side point is that no ML systems are Turing complete. For ML to work, the parameter space has to be highly constrained. So Egnor’s analogy is actually a very accurate characterization of ML in practice. (See the sketch below.)
EricMH
September 26, 2018 at 11:36 AM PDT
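
EricMH’s point can be made concrete with the simplest possible “machine learner”: a single parameter nudged toward a fixed point by repeated exposure to data, a search through a space as constrained as the book’s binding. A minimal sketch (the data and names are invented for illustration):

```python
# One-parameter "learner": gradient descent on w so that y ≈ w * x.
# Like the binding, w is just a state variable nudged by use; the
# whole search space is a single constrained dimension.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]    # noisy samples of y ≈ 2x

w, lr = 0.0, 0.05
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad                              # nudge the parameter

print(round(w, 2))                              # ≈ 2.04
```

Nothing here branches over an unbounded tape; the “learning” is one number drifting to a fixed point, which is the sense in which the comment says practical ML is not Turing complete.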
Shallit once said that computer programs cannot be traced back to their programmers because those programmers could be dead. He also said that computers can produce information even though they do not have minds. When I told him that the minds involved are those of the people who produced the programs and the computer itself, he went into his rant about dead people. He has definite issues...
ET
September 26, 2018 at 08:41 AM PDT
Shallit ruins his case (if any) by sounding like a typical illiterate YouTube troll. Better examples of simple machines that are *designed* to learn: self-adjusting disc brakes and hydraulic valve lifters. Both change their parameters when surfaces wear or change with temperature, to keep the operation within the correct range. Both have intrinsic negative feedback and intrinsic memory. Both learn faster and more smoothly than software. (A toy sketch of such a feedback adjuster follows this comment.)
polistra
September 26, 2018 at 01:29 AM PDT
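
polistra’s two examples are classic negative-feedback devices. Here is a toy sketch of the idea behind a self-adjusting disc brake (the constants and the ratchet model are invented for illustration, not a real mechanism specification):

```python
# Toy negative-feedback adjuster, loosely modeled on a self-adjusting
# disc brake: each application checks pad clearance and ratchets the
# adjuster whenever wear has pushed the gap out of range.
TARGET = 0.50          # desired pad clearance, mm
STEP = 0.05            # slack taken up per ratchet click, mm

clearance = 0.50
for application in range(200):
    clearance += 0.002                 # pad wear widens the gap slightly
    if clearance > TARGET + STEP:
        clearance -= STEP              # intrinsic feedback: take up the slack

print(round(clearance, 2))             # stays near 0.5 despite 0.4 mm of wear
```

State plus feedback keeps the operation in range, with nothing anywhere that represents what was “learned.”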
