Uncommon Descent: Serving The Intelligent Design Community

Category: Engineering

Tale of the Transmission

It finally happened. I’ve been nursing along my car’s transmission for several months (careful driving, changing the fluids, etc.), but last week it finally failed completely, with an accompanying whump! and a jerk, and the car had to be towed to the auto repair shop. The initial hope was that a regular tear-down and cleanout, along with replacement of the wearable parts, would take care of it.  That was going to set me back about $1,500, which I wasn’t happy about but could live with.  Unfortunately, it turned out that some of what the transmission guys call “hard parts” – in this case the planetary gear assembly – were broken, so they were going to have to order a whole Read More ›

Rube Goldberg Complexity Increase in Thermodynamically Closed Systems

A thermodynamically closed system that is far from equilibrium can increase the amount of physical design it contains, provided it is either front-loaded or has an intelligent agent (like a human) within it. A simple example: a human on a large nuclear-powered space ship can write software, compose music, or create many other designs. The space ship is closed but far from equilibrium, yet complexity can still increase because of the human intelligent agent. Consider then a robot whose sole purpose is to make other robots, like it or even unlike it, in a similarly thermodynamically closed system. It can do this provided the software is front-loaded into the robot. Can the robot make something more irreducibly complex than Read More ›
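As a rough illustration of what front-loading looks like in software (my own sketch in Python, not anything from the post), consider a quine: a program whose entire output is a copy of its own source, so every bit of structure in the "offspring" was already present in the "parent."

    # The two executable lines below print an exact copy of themselves:
    # all of the structure in the "offspring" text is already front-loaded
    # in the "parent" text; nothing new is designed during the copy.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

Whether such a replicator could ever output something more irreducibly complex than itself is exactly the question the post goes on to raise.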

Avida’s EQU in 18 instructions

The evolutionary model Avida is best known for evolving the EQU function. In the supplementary materials for the 2003 Nature paper, the authors presented the shortest known program to compute EQU, which takes 19 instructions. They note that it has not been proven to be the shortest possible program. In fact it is not, and I present a program that computes EQU using only 18 instructions: IO IO nop-C push pop nop-A nand push nand swap nop-C swap nand nand pop nop-C nand IO
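For readers unfamiliar with the task: Avida programs have NAND as their only logic primitive, and EQU is bitwise equality (XNOR) of two 32-bit values. Here is a minimal Python sketch (my own illustration, not Avida code) showing how EQU can be composed from five NAND operations, which is the kind of composition the 18-instruction genome has to achieve:

    MASK = (1 << 32) - 1  # Avida registers are 32-bit

    def nand(a, b):
        """Bitwise NAND, the only logic primitive available to an Avida program."""
        return ~(a & b) & MASK

    def equ(a, b):
        """Bitwise EQU (XNOR) built from five NANDs."""
        c = nand(a, b)
        d = nand(a, c)
        e = nand(b, c)
        x = nand(d, e)     # x == a XOR b
        return nand(x, x)  # NOT(XOR) == EQU

    # Example: 1100 and 1010 agree in their first and last bits -> 1001
    assert (equ(0b1100, 0b1010) & 0b1111) == 0b1001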

Help requested of readers to advance design detection in DNA

DNA Skittle is a DNA visualization program pioneered by John Sanford to help identify design features of DNA that are recognizable to the human visual system. The program is available for free, but the Skittle developers need help ensuring it is usable over the web. Can you spare 10 minutes to review the product and post comments here at UD or at CEU DNA Skittle Feedback? If you want an account at CEU, just post here at UD that you want an account, and I'll have a temporary password emailed to you (to your UD profile email), using your UD handle as your CEU username. Here is the first round of feedback the Skittle team needs: Are all the buttons functional Read More ›

Creationist RA Herrmann’s ID theory — the last magic on steroids!

First, an excerpt from Dr. Herrmann’s personal history: I was associated with the occult from birth, but in 1946 when I was 12 years old, I suddenly became extremely interested in occult manifestations and simultaneously became, what is sometimes called, a “mental giant” – indeed, a child scientist. I delved into any aspect of the occult that had any meaning for a child of my age. For two or three months, I was a superior telepathist. I once telepathically identified more than forty-five cards out of fifty-two cards from an ordinary deck of playing cards. However, suddenly I lost this particular telepathic ability, I lost the “key” so to speak. Obviously, I was brokenhearted over this state of affairs and Read More ›

Illustrating embedded specification and specified improbability with specially labeled coins

The reason the 500-fair-coins-heads illustration has been devastating to the materialists is a fact that had somewhat escaped everyone until Neil Rickert (perhaps unwittingly) pointed it out: the sides of the coin are distinguishable, but not in a way that biases the probability. This fact guarantees that chance cannot construct recognizable symbolic organization; it can only destroy it. In essence, the world of symbols (heads and tails) has become somewhat decoupled from the world of materials, and the world of specialized information (in the form of recognizable configurations like all-coins-heads) can thus transcend material causes. If the coins were perfectly symmetric and did not have any markings to let you know one side was distinguishable from the other, Read More ›
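For concreteness, here is a short Python sketch (my own, not from the post) of the raw improbability behind the illustration: the chance that 500 independent fair coins all land heads, and one simulated toss checked against that recognizable all-heads configuration.

    import random

    N = 500

    # Probability that N independent fair coins all land heads
    p_all_heads = 0.5 ** N   # roughly 3e-151, i.e. 500 bits of improbability

    # One simulated toss of N fair, distinguishable coins
    coins = [random.choice("HT") for _ in range(N)]
    all_heads = all(c == "H" for c in coins)

    print(f"P(all {N} heads) = 2^-{N} = {p_all_heads:.2e}")
    print("This toss matched the all-heads configuration:", all_heads)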

Specified Entropy — a suggested convention for discussion of ID concepts

In terms of textbook thermodynamics, a functioning Lamborghini has more thermal entropy than that same Lamborghini with its engine and other vital parts removed. But intuitively we view such destruction of a functioning Lamborghini as an increase in entropy, not a decrease. Something about this example seems downright wrong… To fix this enigma, and to make the notion of entropy line up with our intuitions, I'm suggesting that the notion of "specified entropy" be used to describe the increase in disorganization. I derive this coined phrase from Bill Dembski's notion of specified information. In the case of the Lamborghini getting its vital parts removed, the specified entropy goes up by exactly the amount that the specified Read More ›

Forgotten Creationist/ID Book endorsed by Nobel Prize Winner in Physics

There is a forgotten creationist book by engineer and physicist Robert Gange, PhD: Origins and Destiny, published in 1986. It is available for free online, but for how long, I do not know. It was pioneering, anticipating arguments that would be found in ID for the next 27 years, and likely beyond. Gange worked in the field of cryophysics, so it is no surprise he writes with incredible insight regarding thermodynamics. His book is the only book written by a creationist that I agree with on the subject of thermodynamics, and he uses the so-called "New Generalized 2nd Law" to make his case. [The Kelvin-Planck version of the 2nd Law is a special case of the "New Read More ›

The Tragedy of Two CSIs

CSI has come to refer to two distinct and incompatible concepts. This has led to no end of confusion and flawed argumentation. CSI, as developed by Dembski, requires the calculation of the probability of an artefact under the mechanisms actually in operation. It is a measurement of how unlikely the artefact was to emerge given its context. This is the version that I've been defending in my recent posts. CSI, as used by others, is something more along the lines of the appearance of design. It's typically along the same lines as the notion of "complicated" developed by Richard Dawkins in The Blind Watchmaker: complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random Read More ›
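To see how strongly the calculated improbability depends on the assumed mechanism, here is a toy Python sketch (my own illustration; the string, error rate, and mechanisms are hypothetical, not Dembski's examples) expressing -log2 of the probability, in bits, under two different mechanisms:

    import math

    artefact = "A" * 20  # a toy artefact: twenty copies of the letter A

    # Mechanism 1: each character typed uniformly at random from 26 letters
    p_random_typing = (1 / 26) ** len(artefact)

    # Mechanism 2: a copier duplicates a seeded 'A' with a 1% per-character
    # error rate (hypothetical numbers, purely for illustration)
    p_copying = 0.99 ** len(artefact)

    # The same artefact looks wildly improbable under one mechanism and
    # unremarkable under the other.
    print(f"random typing:    {-math.log2(p_random_typing):.1f} bits")  # ~94.0
    print(f"faithful copying: {-math.log2(p_copying):.1f} bits")        # ~0.3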

Ranking the information content of platonic forms and ideas

Consider the following numbers and concepts (dare I say platonic forms): 1; 1/2; 1/9 (i.e., 0.111111…); pi; pi squared; the book War and Peace by Tolstoy; an approximate self-replicating von Neumann automaton (i.e., living cells); the Omega number, Chaitin's constant; and Chaitin's super Omega numbers. I have listed these concepts in order of estimated information richness, from lower to higher. The curious thing is that even though we can't really say exactly how many bits each concept has, we can rank the concepts in terms of estimated complexity. Pi can be represented by an infinite number of digits, and thus by a far greater number of bits than are contained in Tolstoy's War and Peace, but Read More ›
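The intuition behind the ranking is descriptive (Kolmogorov-style) complexity: pi has infinitely many digits, yet those digits can be streamed by a very short program, so its description is small. A rough illustration (my own sketch, using Gibbons' unbounded spigot algorithm in Python):

    def pi_digits():
        """Stream the decimal digits of pi (Gibbons' unbounded spigot algorithm)."""
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4 * q + r - t < n * t:
                yield n
                q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
            else:
                q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                    (q * (7 * k + 2) + r * l) // (t * l), l + 2)

    digits = pi_digits()
    print([next(digits) for _ in range(10)])  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]

War and Peace, by contrast, has no known short generating program, so its estimated descriptive complexity is far higher even though the text itself is finite.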

CSI Confusion: Remember the Mechanism!

A number of posts on Uncommon Descent have discussed issues surrounding specified complexity. Unfortunately, the posts and discussions that resulted have exhibited some confusion over the nature of specified complexity. Specified complexity was developed by William Dembski, and deviation from his formulation has led to much confusion and trouble. I'm picking a random number between 2 and 12. What is the probability that it will be 7? You might say it is 1 in 11, but you'd be wrong, because I chose that number by rolling two dice, and the probability was 1 in 6. The probability of an outcome depends on how that outcome was produced. In order to calculate a probability, you must always consider a mechanism and an Read More ›
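A minimal Python sketch of the dice point (my own illustration, not from the post): the same outcome, "the number is 7," gets one probability under the naive uniform assumption and a different one under the mechanism actually used.

    from fractions import Fraction
    from itertools import product

    # Naive assumption: the number was drawn uniformly from 2..12
    p_naive = Fraction(1, 11)

    # Actual mechanism: the number is the sum of two fair dice
    rolls = list(product(range(1, 7), repeat=2))
    p_mechanism = Fraction(sum(1 for a, b in rolls if a + b == 7), len(rolls))

    print(p_naive)      # 1/11
    print(p_mechanism)  # 1/6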