Uncommon Descent Serving The Intelligent Design Community

Jeffrey Shallit

Casey Luskin on what ID is and how we should defend it

Luskin: Something is specified if it matches an independent pattern. There is no special, independent pattern to the shape of Mount Rainier; its complexity alone is not enough to infer design. Mount Rushmore, by contrast, does match an independent pattern: the faces of four famous Presidents. Read More ›

Michael Egnor: Jeffrey Shallit, a computer scientist, doesn’t know how computers work

Egnor: It’s remarkable that Dr. Shallit—a professor of computer science—doesn’t understand computation. Materialism is a kind of intellectual disability that afflicts even the well-educated. To put it simply, machines don’t and can’t think. Dr. Shallit’s wristwatch doesn’t know what time it is. Read More ›

Do Jeffrey Shallit’s writings offer more information than a blank page?

Michael Egnor wonders whether that’s true. But he faces the difficulty of convincing anti-ID mathematician Jeffrey Shallit that HE, at least, ought to think they do. Read More ›

Jeffrey Shallit also holds forth on Yale’s David Gelernter

Shallit: "Gelernter is not a biologist and (to the best of my knowledge) has no advanced formal training in biology." We weren't aware, at UD, that math prof Shallit had serious biology credentials either, but perhaps one can dispense with them if one supports Darwinism. Read More ›

Darwinist Jeffrey Shallit asks, why can’t creationists do math?

Referring to calculus textbook author Jonathan Bartlett, he writes, “What surprises me is that even creationists with math or related degrees often have problems with basic mathematics.” Bartlett will answer shortly. Read More ›

Jeffrey Shallit takes on Mike Egnor: Machines really CAN learn!

Neurosurgeon Michael Egnor tells us that computer scientist Jeffrey Shallit, known for attacking ID, has responded to his recent parable explaining why machines can’t learn. According to Shallit, a computer is not just a machine but something quite special: Computer scientist Jeffrey Shallit takes issue with my parable (September 8, 2018) about “machine learning.” The tale features a book whose binding cracks at certain points from the repeated use of certain pages. The damage makes those oft-consulted pages easier for the next user to find. My question was, can the book be said to have “learned” the users’ most frequent needs? I used the story about the book to argue that “machine learning” is an oxymoron. Like the book, machines… Read More ›