The first mistake is the failure to distinguish between classical forms of information and functional information, a distinction described in a short 2003 Nature article by Jack Szostak.(2) In the words of Szostak, classical information theory “does not consider the meaning of a message.” Furthermore, classical approaches, such as Kolmogorov complexity,(3) “fail to account for the redundancy inherent in the fact that many related sequences are structurally and functionally equivalent.” It matters a great deal to biological life whether an amino acid sequence is functional or not. Life also depends upon the fact that numerous sequences can code for the same function, which increases functional survivability in the face of the inevitable steady stream of mutations. Consequently, Szostak suggested “a new measure of information – functional information”.
The second mistake is closely related to the first: it is the belief that entropy = functional information.
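The distinction can be made concrete with a small sketch. Szostak defines functional information as the negative log (base 2) of the fraction of sequences in an ensemble that meet a given activity threshold; Shannon entropy, by contrast, looks only at symbol frequencies. The activity values below are invented purely for illustration, not drawn from any real dataset:

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Shannon entropy (bits per symbol) of a sequence.
    Note: it depends only on symbol frequencies, never on function."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def functional_information(activities, threshold):
    """Szostak's functional information: -log2 of the fraction of
    sequences in the ensemble whose activity meets the threshold."""
    meeting = sum(1 for a in activities if a >= threshold)
    if meeting == 0:
        raise ValueError("no sequence meets the threshold")
    return -math.log2(meeting / len(activities))

# Two sequences with identical Shannon entropy (same letters, reordered)...
s1 = "ACDEFGHIKL"
s2 = "LKIHGFEDCA"
assert shannon_entropy(s1) == shannon_entropy(s2)

# ...whereas functional information depends only on how rare the function
# is in the ensemble. Here 1 of 8 hypothetical sequences is active:
activities = [0.9, 0.1, 0.05, 0.02, 0.01, 0.0, 0.0, 0.0]
print(functional_information(activities, threshold=0.5))  # -log2(1/8) = 3.0
```

Scrambling a sequence leaves its entropy untouched but will generally destroy its function, which is exactly why the two quantities cannot be equated.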
Of course, it is not really a mistake. Fourth-rate science teachers typically don’t understand the problem. Smart naturalists, who understand it very well, keep generating confusion in order to prevent the problem from being honestly discussed.
See also: Protein families are still improbably astonishing – retraction of Matlock and Swamidass paper in order?