Uncommon Descent: Serving The Intelligent Design Community

Category: Engineering

Jeff Shallit — leveling the charge of incompetence incompetently

Jeff Shallit charges Jonathan Wells with incompetence for claiming that duplicating a gene does not increase the available genetic information. To justify this charge, Shallit notes that a symbol string X has strictly less Kolmogorov information than the symbol string XX. Shallit, as a computational number theorist, seems stuck on a single definition of information. Fine, Kolmogorov’s theory implies that duplication leads to a (slight) increase in information. But there are lots and lots of other definitions of information out there. There’s Fisher information. There’s Shannon information. There’s Jack Szostak’s functional information. Information, when quantified, typically takes the form of a complexity measure. Seth Lloyd has catalogued numerous different types of complexity measures used by mathematicians, engineers, and scientists. Here Read More ›
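To see in concrete terms how little extra information duplication adds, here is a minimal sketch using compressed length as a rough stand-in for Kolmogorov complexity (a Python sketch; zlib compression is only a crude proxy, and the sample string is purely illustrative):

    import zlib

    # Compressed length as a crude proxy for Kolmogorov complexity.
    x = b"atggcactgtcaggtcctgaa" * 10   # an arbitrary "gene-like" string
    xx = x + x                          # the same string duplicated

    print(len(zlib.compress(x)))        # compressed length of X
    print(len(zlib.compress(xx)))       # only slightly larger for XX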

Diffusion Entropy Analysis to model natural complex time series vs CSI

Nicola Scafetta has demonstrated that Diffusion Entropy Analysis can identify the physical phenomena underlying complex time series, including non-Gaussian Lévy processes and other series. This appears to be an important development in detecting the complex physical phenomena that give rise to measured time series.

Scafetta’s work promises to be important in detecting Complex Specified Information and distinguishing it from natural complex phenomena; for example, it could help Jill Tarter of SETI distinguish extraterrestrial communications from complex natural phenomena. Read More ›
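For readers who want to see the mechanics, here is a minimal sketch of the Diffusion Entropy Analysis idea (assuming NumPy; the synthetic Gaussian signal, window sizes, and bin count are illustrative choices, not Scafetta’s published settings):

    import numpy as np

    def diffusion_entropy(signal, window_sizes, bins=50):
        """Estimate the DEA scaling exponent delta from S(t) ~ A + delta * ln(t)."""
        entropies = []
        for t in window_sizes:
            # Diffusion trajectories: sums of the signal over every window of length t.
            x = np.array([signal[i:i + t].sum() for i in range(len(signal) - t)])
            counts, edges = np.histogram(x, bins=bins)
            p = counts / counts.sum()
            width = edges[1] - edges[0]
            p = p[p > 0]
            # Shannon (differential) entropy estimate of the displacement distribution.
            entropies.append(-(p * np.log(p)).sum() + np.log(width))
        # The slope of S(t) against ln(t) is the scaling exponent delta.
        delta, _ = np.polyfit(np.log(window_sizes), entropies, 1)
        return delta

    rng = np.random.default_rng(0)
    increments = rng.normal(size=20000)                            # plain Gaussian noise
    print(diffusion_entropy(increments, [10, 20, 50, 100, 200]))   # ~0.5 for Brownian scaling

A Lévy or otherwise anomalous series would yield an exponent departing from 0.5, which is what makes the method useful for telling different underlying processes apart.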

AI, Materialist Dodgeball and a Place at the Table

Ari N. Schulman, “Why Minds Are Not Like Computers,” The New Atlantis, Number 23, Winter 2009, pp. 46-68.
Article Review

“The problem, therefore, is not merely that science is being used illegitimately to promote a materialistic worldview, but that this worldview is actively undermining scientific inquiry.”—UncommonDescent

Read the entire article here.

Unless otherwise noted, all quotations from the article, “Why Minds Are Not Like Computers,” are italicized.

Mr. Schulman walks the tightrope of analysis and criticism, describing how a materialistic worldview actively undermines scientific inquiry in the area of Artificial Intelligence (AI). Analysis and self-criticism should be part of all scientific endeavor; the strict materialist does no such thing; instead, he plays dodgeball.

Much of the article, especially the discussions of the brain, computers, Turing machines, the Turing Test, and the Chinese Room argument, was helpful in understanding the state of affairs in AI for the layman. My comments are those of such a layman, included so that you might see what a layman might take from such an article. Nevertheless, questions remain . . .

Read More ›

ID and the Science of God: Part I

In response to an earlier post of mine, DaveScot kindly pointed out this website’s definition of ID. The breadth of the definition invites scepticism: ID is defined as the science of design detection — how to recognize patterns arranged by an intelligent cause for a purpose. But is there really some single concept of ‘intelligence’ that informs designs that are generated by biological, human, and possibly even mechanical means? Why would anyone think such a thing in the first place? Yet, it is precisely this prospect that makes ID intellectually challenging – for both supporters and opponents.

It’s interesting that not everything is claimed to be intelligently designed. This keeps the phrase ‘intelligent design’ from simply collapsing into ‘design’ by implying a distinction between the intelligence and that on which it acts to produce design. So, then, what exactly is this ‘intelligence’ that stands apart from matter? Well, the most obvious answer historically is a deity who exists in at least a semi-transcendent state. But how can you get any scientific mileage from that?

Enter theodicy, which literally means (in Greek) ‘divine justice’. It is now a field much reduced from its late 17th century heyday. Theodicy exists today as a boutique topic in philosophy and theology, where it’s limited to asking how God could allow so much evil and suffering in the world. But originally the question was expressed much more broadly to encompass issues that are nowadays more naturally taken up by economics, engineering and systems science – and the areas of biology influenced by them: How does the deity optimise, given what it’s trying to achieve (i.e. ideas) and what it’s got to work with (i.e. matter)? This broader version moves into ID territory, a point that has not escaped the notice of theologians who nowadays talk about theodicy.

Read More ›

Mathematics and Darwinism — Plus a Math Problem to Solve

Over at Telic Thoughts Bradford resurrected a discussion based on my UD essay, Writing Computer Programs by Random Mutation and Natural Selection. In reference to the quote, “The set of truly functional novel situations is so small in comparison with the total possible number of situations that they will never occur, which is the point of the original post,” I commented as follows:

That was the main point of my essay: combinatorics produces such huge numbers so quickly that islands of function are totally swamped. My 66-character program, assuming only the 26 lower-case letters, produces 26^66, or about 2.4 x 10^93, possible combinations, which is the number of subatomic particles in 10 trillion universes.

In fact, the C programming language is case sensitive and uses all 92 characters on a standard keyboard, which produces 92^66, or about 4 x 10^129, possible combinations in a 66-character program, or the number of subatomic particles in 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 (roughly 10^49) universes.
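As a quick back-of-the-envelope check of those figures (a Python sketch; the commonly cited figure of roughly 10^80 subatomic particles per universe is assumed here):

    # Rough check of the combinatorics, assuming ~10^80 subatomic particles per universe.
    PARTICLES_PER_UNIVERSE = 10**80

    lowercase_only = 26**66    # 66-character program, lower-case letters only
    full_keyboard = 92**66     # 66-character program, 92 keyboard characters

    print(f"{lowercase_only:.1e}")                                    # ~2.4e+93
    print(f"{full_keyboard:.1e}")                                     # ~4.1e+129
    print(f"{full_keyboard / PARTICLES_PER_UNIVERSE:.1e} universes")  # ~4.1e+49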

Evolutionary biologists put blind faith in chance and necessity and arbitrarily invoke “deep time” to make the impossible imaginarily possible. The problem is that deep time is not actually all that deep. There are only about 10^17 seconds in five billion years.

Hard numbers put things in perspective. The probabilities are not a close call; they are catastrophically lopsided.

Read More ›

Thoughts on Parameterized vs. Open-Ended Evolution and the Production of Variability

Many of the advocates of neo-Darwinism argue that the abilities of evolution are obvious. The idea is that, given variability in a population, selection and/or environmental change will cause a population to move forward in fitness. Basically, the formula is variability + overproduction + selection = evolution. The problem is that the equation hinges on "variability" and its ability to create the kinds of variations the Darwinists need. Read More ›
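A minimal toy sketch of the "parameterized" case may make the distinction concrete (all names and numbers below are invented for illustration): selection can tune whatever parameters the variability operator already reaches, but nothing in the loop enlarges the representation itself.

    import random

    # Toy "parameterized" evolution: the genome is a fixed-length list of numbers,
    # and mutation can only nudge those numbers.
    TARGET = [3.0, -1.5, 7.2, 0.4]

    def fitness(genome):
        return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.1):
        return [g + random.gauss(0, rate) for g in genome]

    population = [[random.uniform(-10, 10) for _ in TARGET] for _ in range(50)]
    for generation in range(200):
        # Overproduction + selection: keep the best half, refill with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[:25]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

    print(fitness(population[0]))   # approaches 0: parameters tuned, representation unchanged

The question at issue in the post is whether biological variability is more like this fixed parameter-tweaking or genuinely open-ended.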

Biological Neg-Entropy

Some of you might have heard that Jonathan Schaeffer and his team at the U. of Alberta recently solved the game of checkers. It made big news in the computer science world.

I first met Jon at the First Computer Olympiad in London (organized by the famous David Levy of chess and computer-chess fame) at which Jon’s program won the gold medal and mine won the silver.

Jon and his team eventually computed the eight-piece endgame database for checkers, and later my colleague Ed Trice and I computed it as well. Jon and I compared results, and it turned out that his database had errors that had evaded his error-detection scheme. This scheme produced internally consistent results, despite the errors. Later, Jon detected errors in my database, which were traced back to a scratch on a CD that evaded my error-detection scheme.

All the errors were eventually traced to data transfer anomalies and not the generative computational algorithms, so CRC (cyclic redundancy check) methods were used to solve the problem.
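For readers unfamiliar with the technique, here is a minimal sketch of how a CRC catches that kind of transfer corruption (using Python's zlib.crc32; the data and the simulated one-bit error are illustrative):

    import zlib

    # Checksum the data before transfer, then verify after: a bit flipped in
    # transit (a scratch, a bad copy) almost certainly changes the CRC.
    original = bytes(range(256)) * 1000
    checksum = zlib.crc32(original)

    corrupted = bytearray(original)
    corrupted[12345] ^= 0x01                         # simulate a one-bit transfer error

    print(zlib.crc32(original) == checksum)          # True: data intact
    print(zlib.crc32(bytes(corrupted)) == checksum)  # False: corruption detected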
Read More ›

Bogus Computer Simulations

This one, published by New Scientist, really takes the cake. From the article:

God may work in mysterious ways, but a simple computer program may explain how religion evolved.

By distilling religious belief into a genetic predisposition to pass along unverifiable information, the program predicts that religion will flourish… The model assumes… that a small number of people have a genetic predisposition to communicate unverifiable information to others. They passed on that trait to their children…

The model looks at the reproductive success of the two sorts of people — those who pass on real information, and those who pass on unreal information.

It would be a colossal understatement to call this utter silliness, and it stuns me that anyone would take this seriously, much less allow it to be published as a “scientific” study by “The World’s No.1 Science and Technology News Service.”
Read More ›

Gambler’s ruin is Darwin’s ruin

The same day I first watched “Expelled” in theaters, I also watched the movie “21,” which is based on the true story of MIT students who made a fortune in Las Vegas casinos through the use of mathematics.

The real story behind the movie began with an associate of Claude Shannon by the name of Dr. Edward O. Thorp of MIT. In the early 1960s, Thorp published a landmark mathematical treatise on how to beat casinos. His research was so successful that Las Vegas casinos shut down many of their card tables for an entire year until they could devise countermeasures to impede Thorp’s mathematics.

Thorp is arguably the greatest gambler of all time. He extended his gambling science to the stock market and made a fortune; his net worth is in the fractional to low billions. He is credited with some independent discoveries that were foundational to the Black-Scholes-Merton equation, which relates the mathematics of heat transfer to stock option pricing. The equation earned its developers a Nobel Prize and was the subject of the documentary The Trillion Dollar Bet.

Thorp would probably be even richer today if Rudy Giuliani had not falsely implicated him in the racketeering scandal involving Michael Milken. Thorp, by the way, keeps a dartboard with Giuliani’s picture on it… 🙂

The relevance of Thorp’s math to Darwinism is that Thorp was a pioneer of risk management (which he used to create the world’s first hedge fund). In managing a hedge fund or managing wagers in casinos, one is confronted with the mathematically defined problem of gambler’s ruin. The science of risk management allows a risk manager or a skilled gambler to defend against the perils of gambler’s ruin. Unfortunately for Darwinism, natural selection has little defense against those perils.
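A minimal sketch of the gambler's ruin problem itself (the small bankroll, modest edge, and goal below are illustrative numbers) shows how often a slightly favorable bet still goes broke before it can compound:

    import random

    def ruin_probability(bankroll, win_prob, goal, trials=10000):
        """Estimate how often a gambler betting 1 unit per round goes broke before reaching the goal."""
        ruined = 0
        for _ in range(trials):
            capital = bankroll
            while 0 < capital < goal:
                capital += 1 if random.random() < win_prob else -1
            ruined += capital == 0
        return ruined / trials

    # Even with a 52% edge per bet, a 5-unit bankroll is ruined roughly two times in three.
    print(ruin_probability(bankroll=5, win_prob=0.52, goal=100))

The analogy in the post is that a new mutation enters a population with the evolutionary equivalent of a very small bankroll.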
Read More ›

Can Computation and Computational Algorithms Produce Novel Information?

As some UD readers are aware, one of my interests is artificial-intelligence computer programming, especially games-playing AI (here, here, and here).

In producing retrograde endgame databases for the game of checkers, with massive computational resources (two CPUs performing approximately a billion integer operations each per second over a period of two months, for a total of 10,000,000,000,000,000 [ten thousand trillion] mathematical calculations), some very interesting results were produced, including correction of human play that had been in the books for centuries. But did the program produce any new information? Well, yes, in a sense, because the computer found stuff that no human had ever found. But here’s the real question, which those of us at the Evolutionary Informatics Lab are attempting to address: Was the “new information” supplied by the programmer and his intelligently designed computational algorithm, or did the computer really do anything original on its own, in terms of information generation?
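For readers curious what "retrograde" construction means, here is a minimal sketch of backward induction over a toy game graph (the positions and moves are invented for illustration; a real checkers database does the same thing over billions of positions rather than a hand-coded dictionary):

    # Toy retrograde analysis: start from terminal positions with known values and
    # work backwards, labeling a position WIN if some move reaches a LOSS for the
    # opponent, and LOSS if every move reaches a WIN for the opponent.
    successors = {           # position -> positions reachable in one move (illustrative)
        "A": ["B", "C"],
        "B": ["D"],
        "C": ["D", "E"],
        "D": [],             # terminal: the player to move has no moves and loses
        "E": [],             # terminal
    }

    values = {p: "LOSS" for p, moves in successors.items() if not moves}

    changed = True
    while changed:           # iterate until no new positions can be labeled
        changed = False
        for pos, moves in successors.items():
            if pos in values or not moves:
                continue
            child_values = [values.get(m) for m in moves]
            if "LOSS" in child_values:
                values[pos] = "WIN"
                changed = True
            elif all(v == "WIN" for v in child_values):
                values[pos] = "LOSS"
                changed = True

    print(values)   # {'D': 'LOSS', 'E': 'LOSS', 'B': 'WIN', 'C': 'WIN', 'A': 'LOSS'}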

The answer is that computers do not generate new information; they only reshuffle it and make it more easily accessible. Here’s an example:

Read More ›

Are Those Without Formal Academic Training in Evolutionary Biology Justified in Challenging the “Experts”?

This is a recurring challenge that most recently reared its head in a comment concerning my essay, Why Mathematicians, Computer Scientists, and Engineers Tend to be More Skeptical of Darwinian Claims.

The argument goes like this (as presented by the commenter in the link provided above):

The majority of degreed computer scientists, engineers, and mathematicians have completed no college course work in the life sciences. Virtually all have college physics under their belts. Some studied chemistry in college. Relatively few enrolled in college courses in biology.

Among “expert” critics of scholarly fields not their own, at most one in a thousand makes a substantive contribution. If UD should happen to be chock-full of engineers, computer scientists, and mathematicians who have all caught life scientists in fundamental error, then it would constitute a singular event in the history of science.

If UD readers promise not to tell anyone, I’ll disclose a secret about my college academic training.

Read More ›

Why Mathematicians, Computer Scientists, and Engineers Tend to be More Skeptical of Darwinian Claims

I found Larry Moran’s presentation in a comment on Granville Sewell’s UD post not particularly persuasive, for the following reasons. I’m not interested in definitions of science; I’m interested in how stuff actually works. I’m perfectly amenable to being convinced that the complexity, information content, and machinery of living systems can be explained by stochastic processes filtered by natural selection, and I would not even demand hard evidence, just some rigorous argumentation based on the following:
Read More ›