In my first post, I discussed the importance of mechanism: in order to compute CSI, you have to take the mechanism into account, and computing CSI without a mechanism is wrong. I deliberately focused on the use of specified complexity in evaluating various possible mechanisms, which is how Dembski uses CSI in his Design Inference argument. However, we are often interested in a system: a collection of artefacts together with the mechanisms that operate on those artefacts. This is the context in which Dembski argues for the Law of Conservation of Information. Many of the questions that have come up relate to this context of systems and to who or what can generate CSI.
With high probability, closed systems do not exhibit large increases in CSI. Small increases in CSI can be explained by “luck,” but large increases in CSI are too improbable to explain by recourse to simple luck. This means that any large increase in the CSI of a system must be explained by something external to that system.
Consider a wireless printer sitting in a room. If the printer begins to print out sonnets, does the CSI of the room increase? The printer itself does nothing substantial to increase the probability of sonnets appearing; none of the mechanisms at work in the room explain sonnets. Thus the probability of sonnets appearing is very low, and their appearance constitutes an increase in CSI. That is because the explanation of the sonnets is external to the room itself.
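To make the improbability concrete, here is a rough sketch. All of the numbers are illustrative assumptions, not anything from Dembski: a sonnet of roughly 600 characters, a 27-symbol alphabet (26 letters plus space), a uniform-random chance hypothesis, and Dembski's 500-bit universal probability bound as the CSI threshold.

```python
import math

# Illustrative assumptions (not from the original post):
# a sonnet of ~600 characters drawn uniformly at random from a
# 27-symbol alphabet, measured against Dembski's 500-bit
# universal probability bound.
ALPHABET_SIZE = 27
SONNET_LENGTH = 600
UNIVERSAL_BOUND_BITS = 500

# Improbability, in bits, of this exact sonnet appearing under
# the chance hypothesis: -log2(p) where p = (1/27)^600.
bits = SONNET_LENGTH * math.log2(ALPHABET_SIZE)

print(f"improbability: {bits:.0f} bits")
print("exceeds the 500-bit bound:", bits > UNIVERSAL_BOUND_BITS)
```

Under these assumptions the sonnet comes out to well over 2,000 bits of improbability, far past the bound, which is why no amount of “luck” inside the room explains it.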
Replace the printer with a sophisticated robot that sits in the room composing sonnets. As the robot produces sonnet after sonnet, does the CSI in the room increase? Given that the room contains a robot capable of composing sonnets, the production of sonnets is highly probable. The robot with the sonnets has approximately the same probability as the robot by itself. The improbability of the state of the room has not significantly increased due to the robot's action. As such, the CSI of the room has not increased.
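The argument here is just conditional probability, and can be sketched with hypothetical numbers (both probabilities below are assumptions chosen for illustration):

```python
import math

# Hypothetical, assumed probabilities for illustration only.
p_robot = 1e-9               # chance the room contains a sonnet-writing robot
p_sonnets_given_robot = 0.99 # such a robot almost surely produces sonnets

# Improbability (in bits) of the room's state, with the robot
# counted as part of the system.
bits_robot_alone = -math.log2(p_robot)
bits_robot_and_sonnets = -math.log2(p_robot * p_sonnets_given_robot)

# The sonnets add almost nothing to the improbability of the room.
print(f"robot alone:        {bits_robot_alone:.2f} bits")
print(f"robot plus sonnets: {bits_robot_and_sonnets:.2f} bits")
```

Because the conditional probability of sonnets given the robot is near 1, the two figures differ by a tiny fraction of a bit: the room's improbability, and hence its CSI, is essentially unchanged by the sonnets.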
At this point, Sal objects that this is inelegant. The entropy of a system is the sum of the entropies of its parts, but the CSI of a system is not the sum of the CSI of its parts. That is an unavoidable consequence of using probabilities: they do not combine in a conserving manner. It would be convenient and elegant if the CSI of a system were the sum of the CSI of its parts. However, it isn't.
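One way to see the non-additivity, again with assumed numbers: when the parts of a system are dependent on one another, the improbability of the whole is not the sum of the improbabilities of the parts taken separately.

```python
import math

# Illustrative, assumed probabilities (not from the post).
p_robot = 1e-9               # marginal probability of the robot
p_sonnets = 1e-30            # marginal probability of sonnets appearing at all
p_sonnets_given_robot = 0.99 # sonnets are near-certain once the robot is there

def bits(p):
    """Improbability of an event in bits: -log2(p)."""
    return -math.log2(p)

# Summing the parts as if they were independent:
sum_of_parts = bits(p_robot) + bits(p_sonnets)

# Improbability of the whole, respecting the dependence
# between the robot and the sonnets:
whole = bits(p_robot * p_sonnets_given_robot)

print(f"sum of parts: {sum_of_parts:.1f} bits")
print(f"whole system: {whole:.1f} bits")
```

The whole comes out far smaller than the sum of its parts, precisely because the robot explains the sonnets: dependence between parts is what breaks the additivity that entropy enjoys.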
However, what if the sophisticated robot were replaced with a human poet? If the human poet composes sonnets, does the CSI in the room increase? The question comes down to whether or not the human is part of the system. If the human is internal to the system, the probability of the sonnets has to be calculated taking his presence into account, and thus the production of the sonnets will not be an improbable event. On the other hand, if the human is not internal to the system, then the CSI of the room is increasing. Everything hinges on whether or not the human is part of the system.
Dembski defines intelligence as the complement of chance and necessity. This means that intelligence cannot be reduced to random chance or natural law; it cannot be represented as a stochastic process. Put another way, we cannot assign probabilities to the actions of intelligent agents. We might be able to do so in certain cases, but in general intelligent agents do not follow probabilities. As a consequence, an intelligent agent can never be internal to the system. In order to calculate CSI, we need to calculate probabilities, and if there is an intelligent agent in the system, the probabilities will not exist. Thus, intelligent agents are always external to the system, and thus have the capability of inserting CSI into the system.
Applying CSI to a system tells us whether or not the system needs something external to it in order to be explained. After defining the boundaries of the system, we can ask whether or not the system itself explains its own state. If it doesn't, we know something external to the system must have influenced it. Intelligence is taken to be external to the system because intelligence is difficult or impossible to reduce to probabilities. That is why we say intelligent agents generate CSI, but robots don't.