
Lee Spetner on evolution and information


From Lee Spetner, author of The Evolution Revolution at True Origin:

Many years ago I published a paper that pointed out that the evolutionary process can be characterized as a transmission of information from the environment to the genome (Spetner 1964). Somewhat later I wrote that there is no known random mutation that adds information to the genome (Spetner 1997). This statement in one form or another has found its way into the debate on evolution versus creation. Evolutionists have distorted the statement to attack it, as in Claim CB102, where Isaak has written his target for attack as, ‘Mutations are random noise; they do not add information. Evolution cannot cause an increase in information.’ Perhaps something like this statement was indeed made in an argument by someone, but Isaak has distorted its meaning. For his ‘refutation’ he writes the following 4 points (the references are his citations): More.

See also: Lee Spetner answers his critics

161 Replies to “Lee Spetner on evolution and information”

  1. 1
    ET says:

    Gene duplication? Really? Did the genes’ binding sites get duplicated too? If not then the newly duplicated gene will need one before it can be expressed. It also has to land on a spot along the chromosome that isn’t buried in the spool. And if that isn’t bad enough Dr Wagner’s book “Arrival of the Fittest” says that even changing the DNA of a gene may not change the protein produced. This is due to the degeneracy of the genetic code along with the degeneracy of the proteome. He even says that two proteins with the same shape do not need the same DNA sequence.

    Add to all of that the paper on waiting for two mutations, and it is clear that gene duplication is out as a blind and mindless process.

  2. 2
    gpuccio says:

    Very interesting article. What Spetner says about the adaptive role of transposons is especially important. I absolutely agree.

    Indeed, I do believe that transposons are not only involved in environmental adaptation, as suggested by Spetner, but that they can also be tools of design in the generation of completely new functional information (see for example what is known about their role in the emergence of new genes). Transposon activity, so apparently random, is the best context to allow some intelligent guide, possibly through a quantum interface.

  3. 3
    ET says:

    gpuccio- Have you read his books “Not By Chance” and “The Evolution Revolution”?

    Transposons contain the code for two of the enzymes required to move the TE’s around.

  4. 4
    Dionisio says:

    gpuccio @2:

    How would the quantum interface work in the context of transposon activity?

  5. 5
    gpuccio says:

    ET:

    No, I have not read his books, but I am tempted! 🙂

    Yes, I know that transposons contain the code for the enzymes required to move the TE’s around.

  6. 6
    Bob O'H says:

    Hm. Spetner includes this quote from Isaak:

    Creationists get by with this claim [that mutations don’t add information] only by leaving the term ‘information’ undefined, impossibly vague, or constantly shifting.

    Inevitably he then leaves the term ‘information’ undefined and impossibly vague. It may be constantly shifting, but because it’s impossibly vague I can’t tell.

    Oh, and he fails to distinguish between mutations being non-random and variation in mutation rates.

  7. 7
    gpuccio says:

    Dionisio:

    My idea is:

    we have two different models where there seems to be an interface between a consciousness and a physical object.

    The first if us: our consciousness, our subjective representations, and our objective brain.

    Nobody can deny that there is a two way connection between subjective representations and objective events in the brain.

    Now, if we don’t accept the unsupported idea of strong AI theory, that subjective experiences arise from the complexities of configuration of objective events (and I certainly don’t accept it, for a lot of reasons), then we have to explain how subjective representations interact with physical matter in the brain.

    The idea that a quantum interface can be the solution is certainly not mine: it goes back to Eccles. Here is the abstract of an article by Beck on the subject:

    https://www.neuroquantology.com/index.php/journal/article/view/168

    “In the struggle about the role of monism and dualism as basic concepts of the mind-brain relation the indeterminacy of quantum events has led an increasing number of authors to postulate quantum brain dynamics as the key towards a scientific understanding of consciousness. In most cases some specific form of macroscopic quantum states in the brain are invoked without giving details for their occurrence or persistence. This raises immediately the question how such states could survive thermal fluctuations in the hot and wet brain environment. In this paper we present a model for a quantum mechanical trigger which regulates synaptic exocytosis, the regulator for ordered brain activity. The model is based on a quantum mechanical tunnelling process which is stable against thermal fluctuations and consistent with the physiological conditions of the synaptic membrane.”

    OK. This is a well established field of speculation, even if certainly controversial, in physics (see also Penrose), neurophysiology and philosophy.

    The extension of this concept to biological design is more a personal speculation, even if I am sure that others have suggested something more or less similar.

    My idea is: once we accept a design explanation for biological information (and I certainly do), beyond the problem of who the designer is, there is the important problem of how he interacted with biological matter. Indeed, the whole concept of design implementation implies some physical interaction with the physical objects that need to receive the designed configuration.

    Of course, when the designer acts through a physical body, like in the case of humans, there is no problem at all: the basic interface between human consciousness and the brain and body (whatever its nature) is enough to convey the information to the human body, which is perfectly capable of transmitting that information to the objects to be designed.

    But, in the case of biological design, the scenario of designers with a physical body is not the most appealing, except for those who believe in aliens as biological designers.

    In all other reasonable scenarios, be it a god or some conscious force, or some non-physical kind of beings, the problem arises of how a conscious designer, who does not have a physical body as we conceive it, can interact with matter.

    So, my reasoning is: this scenario is not so different from the scenario of our human consciousness interacting with the brain. Again, we have a conscious being and, on the other end, a physical reality: not the human brain and body, but biological matter.

    So, I believe that we can invoke a similar hypothesis: a quantum interface that models events that follow the intrinsic quantum laws of probability, but use them to generate functional states, without apparently violating the laws themselves.

    Now, in the case of the human brain interface, the idea is that consciousness interacts through quantum events in the neurons, especially synapses, through some control of pseudo-random events.

    In the biological interface, we equally need random or pseudo-random events that can be modeled to functional states without violating physical laws. IMO, the best candidates are guided apparently random mutations and random variation in general, and more specifically pseudo-random variation through guided transposon activity.

    When transposons move in the genome, we think that the results are random. But they could well be guided at quantum level. Quantum events can easily change critical macroscopic states, in particular circumstances.

    Well, this is my general idea. It’s, of course, highly speculative, but I cannot think of better models, with what we know at present.

  8. 8
    ET says:

    Bob O’H:

    Inevitably he then leaves the term ‘information’ undefined and impossibly vague.

    Read his books. It’s all there.

    Oh, and he fails to distinguish between mutations being non-random and variation in mutation rates.

    Just saying it doesn’t make it so. You actually have to make your case, which you haven’t.

  9. 9
    Dionisio says:

    gpuccio @7,

    Thank you for the excellent explanation.
    As usual, I have to chew and digest it –including the information in the link you provided– before I can claim that I understand it well.

    Definitely this is a very interesting and important topic. Perhaps your comment @7 –with a few editorial adjustments– could be the OP for a separate discussion thread?

    BTW, in the second statement ‘if’ should read ‘is’, right?

    I think this is a stong candidate for a discussion topic at our dinner in a restaurant by the littoral in Palermo. 🙂

  10. 10
    Bob O'H says:

    ET – can you summarise what Spetner means by information? Even if I did try to get his book, it would take some time to arrive (and be quite expensive).

    On non-random mutation and mutation rates, this is the essence of Spetner’s argument from the piece:

    Transposable elements (TE) in the genome are known to cause complex mutations when they are activated. Their activation is not random; they are activated by stress.

    So, under stress TEs are activated, and they can cause mutations. I don’t think this is controversial. But that means that the mutation rate is increased (and also that it is not random over the genome, as IIRC TEs need specific short sequences to insert into). But he gives no evidence that they are directed: they’ll insert into any target sequence, regardless of effect on phenotype.

    Of course, it’s possible that Spetner is using “non-random” and “directed” to mean that TEs will insert into specific sequences, and that the rate of insertion can vary, but that would seem to be a much milder claim.

  11. 11
    gpuccio says:

    Dionisio:

    “BTW, in the second statement ‘if’ should read ‘is’, right?”

    Yes. 🙂

  12. 12
    ET says:

    Buy a dictionary, Bob O’H. He uses the standard and accepted meanings of the word. And Shannon-Weaver and Crick are OK too.

    But he gives no evidence that they are directed: they’ll insert into any target sequence, regardless of effect on phenotype.

    They occur only when needed and transposons carry within their sequence the coding for two of the enzymes required for the TE’s to move around. I would love to hear how natural selection did that. Anyone?

  13. 13
    Bob O'H says:

    ET @ 12 – the problem is that there are multiple standard and accepted meanings of information. If Shannon-Weaver is OK, then duplication does increase information, simply because it increases the number of terms in the summation. As Spetner claims that duplication doesn’t increase information, I’m guessing that’s not the definition he’s using.

    You’re not trying to defend the suggestion that mutations due to TEs are directed, so does that mean you agree with me on that score?

  14. 14
    ET says:

    Bob, two copies of the same thing do not increase the information. And gene duplication has never been demonstrated to be a random process, ie an accident. As for Information Theory, well, Dr Spetner is an expert.

    That said he is talking about the information required to build and maintain an organism. He says:

    The information required for large-scale evolution cannot come from random variations. There are cases in which random mutations do lead to evolution on a small scale. But it turns out that, in these instances, no information is added to the organism. Most often, information is lost. – Not By Chance, page vii

  15. 15
    ET says:

    As for transposons he brings up the work of Dr McClintock and also Barry Warner who posited that TEs are part of a control system that would produce heritable changes in response to environmental cues.

  16. 16
    ET says:

    More on Dr Spetner:

    I received a PhD degree in physics from MIT in 1950, and joined APL (applied physics lab) in 1951. I spent most of my professional career doing research and development on information processing in electronic systems, and teaching information and communication theory. After I had been at APL for about a dozen years, I was offered a year’s fellowship in the university’s (Johns Hopkins) Department of Biophysics. There I was to solve the problems in the extraction of signal from noise in DNA electron-micrographs. I accepted the fellowship and, as it turns out, I learned a lot about biology. (he took courses in biology at Johns Hopkins)

    He goes on to say that he published several papers between 1964-1970 pertaining to the problem of how the information in living organisms could have developed.

    His book “Not By Chance” was a direct response to Dawkins’ “The Blind Watchmaker”.

  17. 17
    Dionisio says:

    @9 error correction: strong

  18. 18
    Dionisio says:

    gpuccio:

    Off topic observation: apparently the author of the paper in the link you provided @7 died the same year the paper was published (2008). At least that’s what the DOI seems to indicate.

  19. 19
    ET says:

    A duplicated gene would be redundant. And as such doesn’t add information.

  20. 20
    Dionisio says:

    Bob O’H @13:

    If it does not relate to complex functionally specified information, then it shouldn’t be relevant to the central idea of the ID paradigm.
    As I have stated before (more than once), I’m not an ID proponent per se, but I definitely agree with their central scientific idea –at least in reference to biology– that the biological systems are intelligently designed.
    As far as I’m aware, complex functionally specified information is not fully quantifiable. Engineering design and software development are not fully quantifiable either. There are important conceptualization steps that are fully “inspirational”, hence can’t be measured in numbers. My project manager –a brilliant engineer– had the initial concept of the software product in his mind before he explained it got written in the tech specs for the programmers to develop it. I can’t think of any way to measure in numbers that initial process in the development.
    The idea was first, the software product was the result.
    The idea was first, the biological systems were the result.

  21. 21
    rvb8 says:

    ‘Information’ in the genome really is an ID concept. Evolutionary biologists never talk about ‘information’ in the genome; they talk about replication, errors, expression, genotype, and phenotype.

    The term ‘information’ was popularised largely by Dembski, was latched onto by people desperate for the hand of God in nature, and is now popular in many places, excepting those places where Biology is understood.

    Ken Ham of AIG is now beginning to use the term, ‘information’, this should be worrying for ID folk, as Ham is nothing but a clear YEC.

  22. 22
    Dionisio says:

    @20 error correction:

    “…before he explained it and it got written in the tech specs…”

  23. 23
    Dionisio says:

    @21:

    The term “information” is very generic and can be applied to many different concepts. In the biology literature we see that term employed regularly in different contexts and with different meanings. Just see the numerous papers referenced in the thread “Mystery at the heart of life” and verify this by yourself. Don’t rely on what others say. Test everything and hold what is good.

    Here’s an example:

    Used over 20 times in the non-ID paper referenced through the link provided here:

    https://uncommondescent.com/intelligent-design/neuroscience-tries-to-be-physics-asks-is-matter-conscious/#comment-636746

    Page 3:
    According to this, and in contrast to the classical view, the information processing units are not the nerve cells but smaller quantum physical processes inside the cells and forming connections between them.

    Page 4:
    At the same time it is seen that when the experimental apparatus and the experimenter are regarded as a unified system, there is a problem with collecting and interpreting the information of that system.

    When expressing a quantum system with classical information, there is a loss of information.

    We, as the experimenter/observer, change the relationship between the autonomy and reliability of the information obtained from the system (von Lucadou, 1995), and this causes a loss of information (Weizsacker and Weizsacker, 1972).

    In addition, the measurement instruments and the characteristics of micro-universe particles which are carried to the macro universe such as position, velocity and momentum interact and are recorded.

    In this process information loss occurs and the level of loss is closely related to the descriptive language which science uses.

    page 5:

    What can be understood is that we urgently need a paradigm shift in the way we understand and explain this new information […]

    Page 6:

    Quantum physics can show itself in biological structures in the shape of superposition, entanglement, quantum information processing and matter–wave interaction, and there is much experimental proof on this topic […]

    This congregation of retinal cells is where the information from light undergoes its first processing.

    Does quantum collapse occur only in the presence of human beings? Does it happen before measurement information reaches the retina, or after?

    Page 7:

    […] the information is on average flowing from the microscopic level to the macroscopic or mesoscopic level.

    […] the collapse of the wave function can occur without the presence of a conscious human being or in other words that photons do collapse within the retina and subsequent processing of information at the level of neural membranes proceeds […]

    Page 8:

    In our daily lives, we can decode the very complex magnetic information in radio and television and transform it into sound and vision.

    Similarly, the nerve information in the brain may be represented electromagnetically.

    Coherent firing in nerve cells may provide the possibility for information to pass from the nerve cells to the magnetic field […]

    Page 10:

    Tubulins behave in this situation as qubits, the operational unit of quantum information.

    Spin mediated consciousness theory, quantum brain dynamics and the Orch OR theory may bring an explanation to the mechanisms of the consciousness–brain connection problem, free will, the unity of consciousness, qualia, non-algorithmic information processing, remembering and forgetting over time, and the effect of anesthesia.

    Page 11:

    The logic of the normal unconscious mind and the schizophrenic consciousness may therefore be Lq, or the logic of quantum information […]

    Page 12:

    In Lq, propositions are configured in qubits and the formal interpretation of the unconscious mind may potentially be understood as quantum-informational.

    These considerations lead to the possibility of building a different approach for information; two possible complementary ways are the study of quantum gates on qubits and Licata’s Hamiltonians with constraints.

    The geometric approach to the handling of quantum information is more fruitful when we try to study the global evolution of a system without forcing its non-local nature in any way.

    What we call science is the systematisation of information obtained from nature.

  24. 24
    ET says:

    rvb8- Sir Francis Crick is the one who started the information with respect to biology path.

  25. 25
    Bob O'H says:

    ET @ 14 – You’re clearly not using Shannon-Weaver information: mathematically duplication has to increase the Shannon-Weaver information (because each term in the summation is non-negative, and duplication increases the number of terms). Fine, but that still doesn’t explain what definition of information Spetner uses.

  26. 26
    Mung says:

    Sir Francis Crick is the one who started the information with respect to biology path.

    I sincerely doubt that this is the case.

  27. 27
    Mung says:

    You’re clearly not using Shannon-Weaver information: mathematically duplication has to increase the Shannon-Weaver information (because each term in the summation is non-negative, and duplication increases the number of terms).

    How does a simple duplication change the probabilities involved? And if there is no change in the probability distribution there is no change in the Shannon information sense.

  28. 28
    ET says:

    Bob O’H- You clearly don’t understand what Shannon-Weaver said. Spetner taught Information and communication theory. And you?

    Dr Spetner talks about information in the sense of information theory. That much is in his books.

  29. 29
    ET says:

    Mung@ 26- Was there someone else before Crick who talked about information with respect to biology? Probably there was, but he seems to be the one who defined it in terms of sequence specificity.

  30. 30
    Mung says:

    ET, the fact that he taught it doesn’t count for much. All sorts of people teach about entropy while still lacking any understanding of what it is.

  31. 31
  32. 32
    ET says:

    Information means here the precise determination of sequence, either of bases in the nucleic acid or on amino acid residues in the protein. Sir Francis Crick in “Central Dogma”

  33. 33
    ET says:

    Any examples of people teaching about entropy while lacking any understanding of it? Any examples of Dr Spetner misusing the word information with respect to information theory (or not understanding IT)?

  34. 34
    rvb8 says:

    ET @32,

    when did Francis Crick become Sir Francis Crick?

    ‘Christianity may be OK between consenting adults in private but should not be taught to young children.’ Francis Crick.

    Your turn to quote mine. It is after all the central pillar of ID argument; get someone famous, preferably Darwin, and twist and mutilate their words.

  35. 35
    Mung says:

    Your turn to quote mine. It is after all the central pillar of anti-ID argumentation; get someone famous, preferably Behe, and twist and mutilate their words.

    🙂

  36. 36
    ET says:

    rvb8- Crick was knighted in 1999. Now it’s time for you to show there was a quote-mine as opposed to mindlessly just saying it. But we all know that you cannot.

  37. 37
    Bob O'H says:

    Mung @ 27 – Shannon information is -Σ p log p. So if you create a duplicate copy, you double the number of terms, and the information is -2 Σ p log p.

    There is a change in the distribution: duplication even changes the dimensions of the distribution!
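    The disagreement between @27 and @37 can be checked directly. A minimal Python sketch, using a toy DNA string (the sequence and function names are illustrative, not from either side):

    ```python
    from collections import Counter
    from math import log2

    def entropy_per_symbol(seq):
        """Shannon entropy -sum(p * log2 p) over the symbol frequencies of seq."""
        n = len(seq)
        return -sum((c / n) * log2(c / n) for c in Counter(seq).values())

    gene = "ACGTACGGTTCA"      # toy sequence, not a real gene
    duplicated = gene + gene   # a tandem duplication

    h_single = entropy_per_symbol(gene)
    h_dup = entropy_per_symbol(duplicated)

    # Duplication leaves the symbol frequencies, and hence the per-symbol
    # entropy, unchanged (Mung's reading @27)...
    assert abs(h_single - h_dup) < 1e-12

    # ...while the naive cost of *transmitting* the whole message doubles
    # with its length (Bob O'H's reading @37).
    total_single = h_single * len(gene)
    total_dup = h_dup * len(duplicated)
    assert total_dup == 2 * total_single
    ```

    The two sides are arguably measuring different things: one the distribution of symbols, the other the length-weighted cost of sending the message.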

  38. 38
    Bob O'H says:

    ET @ 28 – I don’t understand what Spetner means by information – he doesn’t explain it, and your explanations don’t make any sense, as I’ve explained. It’s possible that you don’t understand what he writes either, which would explain why your explanation doesn’t seem to make sense.

  39. 39
    rvb8 says:

    ET,

    he refused a CBE in 1963, and is often referred to incorrectly as ‘Sir’.

    Sorry!

  40. 40
    ET says:

    Bob O’H- Why is it that only evolutionists seem to have issues with the word “information”?

  41. 41
    ET says:

    Bob O’H:

    So if you create a duplicate copy, you double the number of terms.

    Redundancy does not add information. Clearly you don’t grasp information theory.

  42. 42
    Bob O'H says:

    ET @ 41 – Do you have a reference for the proof of that?

  43. 43
    ET says:

    Google is your friend:

    Information Theory Notes:

    Redundancy does not give new information but is essential to communication
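    The note quoted above can be made concrete with the standard textbook construction of a 3x repetition code (a sketch; this example is not from Spetner): the redundant copies carry no new message content, yet they are exactly what makes communication over a noisy channel possible.

    ```python
    def encode(bits):
        """3x repetition code: send each bit three times."""
        return [b for b in bits for _ in range(3)]

    def decode(coded):
        """Majority vote over each group of three received bits."""
        return [1 if sum(coded[i:i + 3]) >= 2 else 0
                for i in range(0, len(coded), 3)]

    message = [1, 0, 1, 1]
    sent = encode(message)

    # Corrupt one bit on the channel; the redundant copies add no new
    # information, but they let the receiver recover the original message.
    received = sent[:]
    received[4] ^= 1
    assert decode(received) == message
    ```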

  44. 44
    Bob O'H says:

    And now can you give a reference that gives the proof?

    Oh, and my apologies – I thought you were using “redundancy” as being synonymous with duplication. I’d like to see a proof that duplication doesn’t add information (as in information-theoretic information, of course).

  45. 45
    ET says:

    NEW information, Bob. Do TRY to keep up. And the proof is in the FACT that the SAME information is in the duplicate.

    Do tell, Bob, where is the NEW information in a duplicate copy?

  46. 46
    Mung says:

    And the proof is in the FACT that the SAME information is in the duplicate.

    We have no way to measure that.

  47. 47
  48. 48
    gpuccio says:

    Bob O’H:

    Let me understand. Are you saying that if you print 1000 copies of a novel, the information is multiplied by 1000?

    But, if you want to transmit that information (Shannon scenario), wouldn’t it be enough to send the content of the novel, and just add “repeated 1000 times”?

    Just to understand your point.

  49. 49
    Mung says:

    Bob O’H:

    Let me understand. Are you saying that if you print 1000 copies of a novel, the information is multiplied by 1000?

    We have no way to measure that sort of information. So we cannot say whether it would be multiplied by 1000 times. I have a theory of information decrease in which information decreases when it is copied. 😉

    So there is more information in a zygote than in an adult, because of all the copying that goes on.

  50. 50
    ET says:

    Mung:

    So there is more information in a zygote than in an adult, because of all the copying that goes on.

    They would be the same. The zygote would have more information than any one adult cell but the adult, being a sum of the zygote’s information and an aggregate of many cells, should have the same amount of information as the zygote.

    Mung doesn’t even understand his own theory 😛 😎

  51. 51
    gpuccio says:

    Mung:

    “I have a theory of information decrease in which information decreases when it is copied.”

    Well, the value of a painting certainly decreases when it is copied.

  52. 52
    ET says:

    That would all depend on the painting, who and how it was copied.

  53. 53
    gpuccio says:

    ET:

    “That would all depend on the painting, who and how it was copied.”

    Correct. How complex is information theory…

    No wonder darwinists do not understand it! 🙂

  54. 54
    Bob O'H says:

    ET @ 45 – I thought we were discussing Shannon information here. If you are using some other definition of ‘information’, can you please tell me what sort you are using.

    gpuccio @ 48 – In essence, using Shannon information, then yes. You would transmit the same information 1000 times so the information is the information in one novel times 1000. What you are describing (send it once followed by “times 1000”) is closer to Rissanen’s idea of Minimum Description Length.

  55. 55
    Dionisio says:

    gpuccio @48:

    […] wouldn’t it be enough to send the content of the novel, and just add “repeated 1000 times”?

    Somehow the name Kolmogorov seems to come to mind, doesn’t it?

    🙂
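    The Kolmogorov/MDL intuition in the “repeated 1000 times” exchange above can be illustrated with an off-the-shelf compressor. A sketch, using a sentence from the OP as a stand-in for the “novel” (any text would do):

    ```python
    import zlib

    # Stand-in for the novel: one distinctive sentence.
    novel = (b"Many years ago I published a paper that pointed out that the "
             b"evolutionary process can be characterized as a transmission of "
             b"information from the environment to the genome.")
    copies = novel * 1000

    c_one = len(zlib.compress(novel, 9))
    c_all = len(zlib.compress(copies, 9))

    # Naively, 1000 copies cost ~1000x the bits of one copy (the Shannon
    # transmission view). A compressor instead finds something close to
    # "send it once, then say 'repeat 1000 times'" (the Kolmogorov/MDL view),
    # so the description length grows far more slowly than the raw size.
    print(c_one, c_all)
    assert c_all < (1000 * c_one) // 10
    ```

    On this view, duplication adds almost nothing to the shortest description of the genome, even though it lengthens the raw message.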

  56. 56
    ET says:

    Bob O’H:

    I thought we were discussing Shannon information here.

    We are. Clearly you don’t have any idea what information theory says. Redundancy does not add NEW information under information theory.

    Google is your friend. Search on “information theory and redundancy”

  57. 57
    Mung says:

    “Information” in “information theory” does not carry the same meaning as “information” in colloquial use. Neither does “redundundancy.”

  58. 58
    ET says:

    Umm, information in information theory doesn’t care about meaning, ie information in colloquial use, because the equipment used to send, receive and store it doesn’t care. If meaning were relevant then encrypted messages could never be sent until you somehow convinced the transmitter of the meaning of the message.

  59. 59
    gpuccio says:

    Dionisio:

    “Somehow the name Kolmogorov seems to come to mind, doesn’t it?”

    Absolutely! 🙂

  60. 60
    gpuccio says:

    Mung:

    “Neither does “redundundancy.””

    Is one “dun” redundant?

    You are a genius in self-reference! 🙂

  61. 61
    gpuccio says:

    Bob O’H:

    Then I would say that duplication adds no functional information to the system. Functional information is the real thing in ID theory.

    Of course, it is not completely true, even so. A duplication where both genes are translated can add to or subtract from the functionality of the system: indeed, duplications are often deleterious, but in rare cases they could be of help.

    The important point is that duplication is a rather simple transition (not completely simple, but certainly simpler than finding a new long functional sequence). So, a duplication that influences the existing sequence could be considered as a not too complex event of tweaking of that function (in a negative or positive sense).

  62. 62
    PaV says:

    Bob O’H:

    Mung @ 27 – Shannon information is – sum p log p. So if you create a duplicate copy, you double the number of terms. So the information is – 2 sum p log p.

    Bob: Shannon ‘information’ has to do with “communication.” If I have to transmit two copies of the same letter over an electronic device, that’s twice as much ‘information’ that has to be communicated. That is, Shannon Information, or, if you prefer, Shannon-Weaver Information, is a defective definition when it comes to what the human mind understands as ‘information.’

    So, let’s not pretend not to understand the distinction involved here.

    I don’t understand what Spetner means by information – he doesn’t explain it, and your explanations don’t make any sense, as I’ve explained. It’s possible that you don’t understand what he writes either, which would explain why your explanation doesn’t seem to make sense.

    Bob, are you looking for a definition of information, or, are you looking to foil ID’s effort to identify information? I’m curious.

    What if I told you that there is such a thing as a “quantitative measure of information”? What if I told you that it comes from someone who worked for Bell Labs in the 1920’s? Does that pique your interest? Do you think that would resolve the impasses surrounding this topic?

  63. 63
    PaV says:

    Bob O’H:

    Darwin wrote Origin of Species. What is Darwin’s definition of “species”?

  64. 64
    PaV says:

    From Wikiwand:

    “Species Problem”

    Does this completely undermine, and invalidate, evolutionary science?

  65. 65
    Bob O'H says:

    gpuccio @ 61 – fair enough, if you want to use “functional information” rather than “information”. But ET didn’t think that was needed, and Spetner didn’t define information at all.

    PaV @ 62 – I was specifically interested in how Spetner defined information: that’s why I wrote “I don’t understand what Spetner means by information”.

    If you had read my comment at 27 – you know, the bit you quoted – then you would know that I am aware of methods to quantify information, in particular Shannon’s approach.

  66. 66
    PaV says:

    Bob O’H:

    I was looking around on the net for Spetner’s idea of information. In a Wikipedia article on Spetner, it mentions “genetic information.” If you click on this linked phrase, you get: “nucleic acid sequence.” So, why not use that as Spetner’s definition of information: nucleic acid sequence. Wikipedia is our authoritative source on this.

  67. 67
    ET says:

    Again for the learning impaired:

    Dr Spetner uses “information” as it is used in information theory. That much is clear in his book “Not By Chance”.

    You’re clearly not using Shannon-Weaver information: mathematically duplication has to increase the Shannon-Weaver information (because each term in the summation is non-negative, and duplication increases the number of terms).

    That is false. Redundancy does not add new information and a duplicate copy is a form of redundancy. I even provided two references to support my claim. Bob O’H hasn’t provided any references to support his.

    Dr Spetner was talking about adding NEW information via genetic accidents, ie random mutation, and there still isn’t any evidence that gene duplication is a random mutation.

  68. 68
    Bob O'H says:

    Joe @ 67 – you have provided no references that say that Shannon information is the same when there is duplication. If you’re going to provide references that support your claim, then it helps if they support your claim. Where, for example, does the Wikipedia page on redundancy say that Shannon information is the same under duplication? If anything, it says that duplication does change the information: otherwise the redundancy would be 0 (assuming a single copy had no redundancy).
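
    The two claims being argued here can be reconciled numerically. A minimal Python sketch (mine; the toy sequence is illustrative) comparing a sequence with its tandem duplicate:

    ```python
    from collections import Counter
    from math import log2

    def entropy_per_symbol(seq):
        """Shannon entropy H = -sum p_i * log2(p_i) over symbol frequencies."""
        n = len(seq)
        return -sum((c / n) * log2(c / n) for c in Counter(seq).values())

    gene = "ACGTACGGTTCA"
    duplicated = gene + gene  # a tandem duplication

    h1 = entropy_per_symbol(gene)
    h2 = entropy_per_symbol(duplicated)

    # Duplication leaves the symbol frequencies, and hence the per-symbol
    # entropy, unchanged...
    assert abs(h1 - h2) < 1e-12
    # ...while the total (entropy rate times length) doubles with the length.
    print(h1 * len(gene), h2 * len(duplicated))
    ```

    Per-symbol entropy is untouched by duplication, while the total doubles; whether that counts as “adding information” depends entirely on which of the two quantities one has in mind.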

  69. 69
    Mung says:

    gpuccio:

    Is one “dun” redundant?

    Good one! 🙂

  70. 70
    Mung says:

    Bob O’H:

    Joe @ 67 – you…

    lol

  71. 71
    ET says:

    Dick:

    you have provided no references that say that Shannon information is the same when there is duplication.

    Of course I have. See comments 43 and 47. NEW information, Dick. The argument pertains to NEW information.

  72. 72
    ET says:

    dick@ 25:

    You’re clearly not using Shannon-Weaver information: mathematically duplication has to increase the Shannon-Weaver information (because each term in the summation is non-negative, and duplication increases the number of terms).

    Irrelevant because the topic pertains to NEW information. You clearly are unable to follow along and you are just on an agenda of willful ignorance.

  73. 73
    Bob O'H says:

    ET @ 71 & 72 – I asked about Spetner’s definition of information. You said Shannon’s is OK. Now you’re changing this to say it’s about new information. OK, but I’ll ask again – what definition of information is Spetner using?

  74. 74
    ET says:

    LoL! Bob, did you even read the article at True Origin? The entire thing pertains to random mutations producing NEW information. So no, I didn’t change anything.

  75. 75
    ET says:

    From reading Drs Spetner, Crick and Meyer (Stephen C.) the following is the definition used:

    the attribute inherent in and communicated by one of two or more alternative sequences or arrangements of something (such as nucleotides in DNA or binary digits in a computer program) that produce specific effects Merriam-Webster

  76. 76
    Mung says:

    I just cracked open Spetner’s Not By Chance again and he uses the term “information” in its everyday colloquial sense.

    ET go home 🙂

  77. 77
    PaV says:

    Bob O’H:

    I asked you above if it would interest you that there was a “quantitative measure of information” given in 1928. You didn’t respond.

    The ‘measure’ was given by Ralph Hartley, who, at the time, was working for Bell Labs–same as Shannon years later.

    His ‘measure’ was log s^n. S stands for symbol. N stands for the number of times an “s” is chosen.

    For a DNA sequence, s=4 (nucleotides), and n=sequence length.

    This ought to sound like “nucleic acid sequence” which is Wikipedia’s definition of “genetic information.”

    Isn’t this all simply straightforward?

    Let me add that Hartley discussed ‘information’ as such. He wrote about human language. And he searched for a way that we can eliminate the “psychological” meaning of “information” so as to arrive at a “quantitative measure.”

    Hartley’s arguments and results all presage Claude Shannon.

    Why all the subterfuge? Why try to fool ourselves?
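
    Hartley’s measure as stated above is simple to compute. A minimal sketch, assuming equiprobable symbols and a base-2 logarithm (the sequence length of 100 is illustrative):

    ```python
    from math import log2

    def hartley_information(s, n):
        """Hartley (1928): H = log(s^n) = n * log s, here in bits (log base 2)."""
        return n * log2(s)

    # DNA: s = 4 symbols (A, C, G, T); a 100-nucleotide sequence.
    print(hartley_information(4, 100))  # 200.0 bits, i.e. 2 bits per nucleotide
    ```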

  78. 78
    ET says:

    Mung, read the Preface and then page 27. Compare to what I have posted: page 27 supports comment 75, and the preface supports the claim that he is using information theory.

  79. 79
    Bob O'H says:

    PaV @ 77 – I’ll leave 1928 to the historians. Unless you want to claim that Spetner used this 1928 definition of information.

    ET – yes, I did read it. It’s because Spetner didn’t define information that I asked what his definition was. I assume you can now quote where Spetner says he uses the definition you give at 75. I’d also be interested in knowing how he quantifies that.

  80. 80
    ET says:

    Hi Bob- You say you read the article and yet your post about new information says otherwise. Oh well.

    The quantification is found in information theory. See comment 14 for one quote pertaining to the information required to build organisms and on page 27 –

    The information in a cell plays a role much like that played by information in a factory. The production file in a factory contains a set of instructions that tell what each worker has to do at each stage. The production file is information carried by printed symbols; the developmental instructions in the cell are information carried by molecular symbols.

    In the preface he talks about information theory, Claude Shannon and how it relates to biology.

  81. 81
    ET says:

    You can gain some idea of how much information is in an organism both from the size of its genome and from the complexity of both the organism’s structure and its function [of the genome]- page 70

    He then goes on talking about information in the normal sense- to inform/ have meaning

  82. 82
    Mung says:

    ET:

    In the preface he talks about information theory, Claude Shannon and how it relates to biology.

    In the preface he talks about information and mentions information theory as if it’s somehow relevant. But if you actually look at the way he uses the term “information” in the preface he uses it in its everyday colloquial sense, not in the Shannon or information theory sense.

    ETA: If you look up the phrase “information theory” in the index it appears on one single page of the book. p. vii. IOW, in the preface.

    ETA: ETA: If you look up “Shannon” in the index it appears on one single page of the book. p. vii. IOW, in the preface.

  83. 83
    Mung says:

    The information in a cell plays a role much like that played by information in a factory. The production file in a factory contains a set of instructions that tell what each worker has to do at each stage. The production file is information carried by printed symbols; the developmental instructions in the cell are information carried by molecular symbols. (p. 27)

    He is clearly using the term in its everyday colloquial sense, not the sense in which it is used in Information Theory (i.e., the Shannon sense).

  84. 84
    ET says:

    The two are not mutually exclusive.

  85. 85
    ET says:

    Shannon provided a way to measure it (information) once we found/ received it. And then how to separate the signal from the noise

  86. 86
    Mung says:

    ET:

    The two are not mutually exclusive.

    No one said otherwise. But the term has a different meaning in information theory than it does in colloquial use.

    ET:

    Shannon provided a way to measure it (information) …

    No, he didn’t.

  87. 87
    Mung says:

    I swear. There ought to be a course on information that people have to take and a test that they have to pass in order to become a part of the ID movement.

  88. 88
    ET says:

    Congratulations Mung, you now “argue” like the TSZ ilk. Condescending, unhelpful spam that suggests you are trying to hide something.

    So the “Mathematical Theory of Communication” didn’t provide a way to quantify information? Really?

    I swear there ought to be a tar & feathering offense for people who just say stuff as opposed to actually making a case.

    This guy must be an IT moron:

    http://www.cs.cmu.edu/~dst/Tutorials/Info-Theory/

    wikipedia isn’t much better:

    https://en.wikipedia.org/wiki/Information_theory

    OK, so if it wasn’t Shannon, who did provide the way we measure information? And what is this alleged different meaning you are talking about?

  89. 89
    Bob O'H says:

    ET – the colloquial use of ‘information’ is vague and multi-faceted. What Shannon did was to define something in a specific context that could be interpreted as ‘information’ (or at least one facet of it). Kolmogorov and R.A. Fisher also defined things that, in their different contexts, could also be interpreted as ‘information’.

    Shannon, Kolmogorov and Fisher all provided ways to quantify information, as they defined it, but those definitions are not synonymous, and none captures the full range of informal meanings of ‘information’.

  90. 90
    ET says:

    Bob- The colloquial use of ‘evolution’ is vague and multi-faceted.

    Why is it that only anti-IDists have issues with the word “information”?

  91. 91
    Mung says:

    ET:
    > Why is it that only anti-IDists have issues with the word “information”?

    I have a problem with the abuse of it, especially when it makes IDists look, well, uninformed.

    So the “Mathematical Theory of Communication” didn’t provide a way to quantify information?

    No, it did not. Not in the sense you’re using the word.

    Have you ever read Werner Gitt? If not, let me recommend that you read his In the beginning was information.

  92. 92
    ET says:

    Mung- Your posts make you look, well, uninformed. And yes, the “Mathematical Theory of Communication” did provide a way to quantify information.

    Yes, I have read Gitt. Did you have a point? If so then make it. Stop acting like Richard T Hughes from TSZ

  93. 93
    Mung says:

    > Did you have a point?

    Yes. You’re wrong.

  94. 94
    ET says:

    > You’re wrong.

    That could be but you have yet to say what I am wrong about and you have failed to make a case that I am wrong.

  95. 95
    Mung says:

    Read page 58 of the Gitt book I referred to.

  96. 96
    ET says:

    There are two versions. One came out in 1997 and 2000 and the other came out in 2007. The 2007 version just says what I already knew- Shannon didn’t care about meaning because the equipment involved didn’t care about it.

    Gitt and Spetner disagree on what noise does to information.

    Page 58 of the original doesn’t talk about Shannon at all.

    So it seems that you don’t have a point at all.

  97. 97
    Mung says:

    LoL.

    Theorem 14: Any entity, to be accepted as information, must entail semantics; it must be meaningful.

    Semantics is an essential aspect of information …

    – Werner Gitt

    So it is not the case that the “Mathematical Theory of Communication” provided a way to quantify information, because it ignores meaning, which is “an essential aspect of information.”

    And Gitt writes that Shannon gave us a mathematical definition of information. See Appendix A1. Shannon’s measure only applies to his mathematical definition. It doesn’t apply to information as that term is usually understood.

  98. 98
    ET says:

    LoL! It ignores meaning because meaning is irrelevant to the measurement, which is actually a measurement of the information carrying capacity. We can apply Shannon’s methodology to information that matches Gitt’s theorem 14.

  99. 99
    ET says:

    Umm, Theorem 14 is on page 70 of the earlier copy and page 72 of the 2007 release.

    You still have yet to say what I am wrong about and you have failed to make a case that I am wrong.

  100. 100
    PaV says:

    Bob O’H:

    If you want to keep your head stuck in the sand, that’s your prerogative.

    Or, you can look at the Wikipedia page for Ralph Hartley, go to publications, find his 1928 paper, and read it. What a thought. Here’s the pdf.

    Shannon must have been familiar with Hartley’s work, and simply modified it to fit the focus of his interest in communication between devices. What Hartley spells out in principle, Shannon spells out in specifics, since technology had advanced.

    Nonetheless, the basic understanding of information comes from Hartley.

    Again, Bob, this is straightforward stuff, and you know it. It might be ‘convenient’ to deny that there is any strict definition of ‘information’; but it also leads one to be disingenuous.

    And, would you like to proffer a definition of “species”? There is no definition of ‘species’ that satisfies everyone. Should we then dispense with evolutionary theory? Is that your position?

    Why do you demand of the ID community what you don’t demand of the Darwinist community?

  101. 101
    PaV says:

    [From Hartley’s 1928 Paper]

    It is desirable therefore to eliminate the psychological factors involved and to establish a measure of information in terms of purely physical quantities. . .

    Suppose the sending operator (Morse Code) has at his disposal three positions of a sending key which correspond to applied voltages of the two polarities and to no applied voltage [Think nucleotides]. In making a selection he decides to direct attention to one of the three voltage conditions or symbols by throwing the key to the position corresponding to that symbol. The disturbance transmitted over the cable is then the result of a series of conscious selections. However, a similar sequence of arbitrarily chosen symbols might have been sent by an automatic mechanism which controlled the position of the key in accordance with the results of a series of chance operations such as a ball rolling into one of three pockets [Think of random mutations].

    The capacity of a system to transmit a particular sequence of symbols depends upon the possibility of distinguishing at the receiving end between the results of the various selections made at the sending end. The operation of recognizing from the received record the sequence of symbols selected at the sending end may be carried out by those of us who are not familiar with the Morse code.
    Hence, in estimating the capacity of the physical system to transmit information we should ignore the question of interpretation, make each selection perfectly arbitrary, and base our result on the possibility of the receiver’s distinguishing the result of selecting any one symbol from that of selecting any other. We would do this equally well for a sequence representing a consciously chosen message and for one sent out by the automatic selecting device already referred to. A trained operator, however, would say that the sequence sent out by the automatic device was not intelligible. The reason for this is that only a limited number of the possible sequences have been assigned meanings common to him and the sending operator. Thus the number of symbols available to the sending operator at certain of his selections is here limited by psychological rather than physical considerations. [Think here of the ‘genetic code’]

    Other operators using other codes might make other selections. Hence in estimating the capacity of the physical system to transmit information we should ignore the question of interpretation, make each selection perfectly arbitrary, and base our result on the possibility of the receiver’s distinguishing the result of selecting any one symbol from that of selecting any other [symbol]. [IOW, if there is a code, and if there is a way of distinguishing symbols, then information can be transmitted, and a basis for ‘information’ can be quantified.]

    By this means the psychological factors and their variations are eliminated and it becomes possible to set up a definite quantitative measure of information based on physical considerations alone.

    QUANTITATIVE EXPRESSION FOR INFORMATION

    At each selection there are available three possible symbols. Two successive selections make possible 3^2, or 9, different permutations or symbol sequences. Similarly n selections make possible 3^n different sequences. . . . Then the number of symbols available at each selection is s and the number of distinguishable sequences is s^n. . . .

    Consider the case of a printing telegraph system of the Baudot type, in which the operator selects letters or other characters each of which when transmitted consists of a sequence of symbols (usually five in number). We may think of the various current values as primary symbols [think nucleotides] and the various sequences of these which represent characters as secondary symbols [think codons]. The selection may then be made at the sending end among either primary [think miRNA, piRNA, etc.] or secondary symbols [codons]. At each selection he will have available as many different secondary symbols as there are different sequences that can result from making n1 selections from among the s primary symbols. If we call this number of secondary symbols s2, then

    s2 = s^n1 . (1)

    . . . The number of possible sequences of secondary symbols that can result from n2 secondary selections is
    s2^n2 = s^(n1n2) . (3)

    Now n1n2 is the number n of selections of primary symbols that would have been necessary to produce the same sequence had there been no mechanism for grouping the primary symbols into secondary symbols. Thus we see that the total number of possible sequences is s^n regardless of whether or not the primary symbols are grouped for purposes of interpretation.

    This number s^n is then the number of possible sequences which we set out to find in the hope that it could be used as a measure of the information involved. Let us see how well it meets the requirements of such a measure.

    For a particular system and mode of operation s may be assumed to be fixed and the number of selections n increases as the communication proceeds. Hence with this measure the amount of information transmitted would increase exponentially with the number of selections and the contribution of a single selection to the total information transmitted would progressively increase. Doubtless some such increase does often occur in communication as viewed from a psychological standpoint. For example, the single word “yes” or “no,” when coming at the end of a protracted discussion, may have an extraordinarily great significance [Think of siRNAs]. However, such cases are the exception rather than the rule. The constant changing of the subject of discussion, and even of the individuals involved, has the effect in practice of confining the cumulative action of this exponential relation to comparatively short periods.

    Moreover we are setting up a measure which is to be independent of psychological factors. When we consider a physical transmission system we find no such exponential increase in the facilities necessary for transmitting the results of successive selections [That is, ribosomes and spliceosomes do just fine]. The various primary symbols involved are just as distinguishable at the receiving end for one primary selection as for another. A telegraph system finds one ten-word message no more difficult to transmit than the one which preceded it. A telephone system which transmits speech successfully now will continue to do so as long as the system remains unchanged [Think genetic code and central dogma]. In order then for a measure of information to be of practical engineering value it should be of such a nature that the information is proportional to the number of selections. The number of possible sequences is therefore not suitable for use directly as a measure of information.

    We may, however, use it as the basis for a derived measure which does meet the practical requirements. To do this we arbitrarily put the amount of information proportional to the number of selections and so choose the factor of proportionality as to make equal amounts of information correspond to equal numbers of possible sequences.

    For a particular system let the amount of information associated with n selections be

    H = Kn, (4)

    where K is a constant which depends on the number s of symbols available at each selection.

    Take any two systems for which s has the values s1 and s2 and let the corresponding constants be K1 and K2. We then define these constants by the condition that whenever the numbers of selections n1 and n2 for the two systems are such that the number of possible sequences is the same for both systems, then the amount of information is also the same for both; that is to say, when

    s1^n1 = s2^n2 , (5)

    H = K1n1 = K2n2 (6)

    from which

    K1 /log s1 = K2 / log s2 . (7)

    This relation will hold for all values of s only if K is connected with s by the relation

    K = K0 log s , (8)
    where K0 is the same for all systems[That is, the same logarithmic base]. Since K0 is arbitrary, we may omit it if we make the logarithmic base arbitrary. The particular base selected fixes the size [my emphasis] of the unit of information. Putting this value of K in (4),

    H = n log s (9)

    = log s^n . (10)

    What we have done then is to take as our practical measure of information the logarithm of the number of possible symbol sequences. [Think of s as a ‘codon’ and n as the number of codons per protein sequence.] . . .

    If we put n equal to unity, we see that the information associated with a single selection is the logarithm of the number of symbols available; for example, in the Baudot System referred to above, the number s of primary symbols or current values [electric currents] is 2 and the information content of one selection is log 2; that of a character which involves 5 selections is 5 log 2. The same result is obtained if we regard a character as a secondary symbol and take the logarithm of the number of these symbols, that is, log 2^5, or 5 log 2. The information associated with 100 characters will be 500 log 2. The numerical value of the information will depend upon the system of logarithms used [or, 500 ‘bits’ in a log2 base]. . . .

    When, as in the case just considered, the secondary symbols all involve the same number of primary selections, the relations are quite simple. [Unless, of course, we’re dealing with evolutionists, and then it becomes very complicated—‘needlessly’ complicated]

    P.S. The formatting of this text was next to impossible. My apologies for the difficulty this leads to.
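
    The Baudot arithmetic in the excerpt above checks out, including the grouping invariance of equation (3). A minimal sketch of my own:

    ```python
    from math import log2

    def H(s, n):
        """Hartley's measure H = n * log2(s), in bits."""
        return n * log2(s)

    # Baudot example from the excerpt: s = 2 primary symbols (current values),
    # 5 primary selections per character.
    per_character = H(2, 5)        # 5 log 2 per character
    hundred_chars = H(2, 5 * 100)  # 500 log 2 for 100 characters

    # Grouping invariance (eq. 3): treating each character as one secondary
    # symbol drawn from s2 = s^n1 = 32 alternatives gives the same totals.
    assert per_character == H(2**5, 1)
    assert hundred_chars == H(2**5, 100)
    print(per_character, hundred_chars)
    ```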

  102. 102
    Mung says:

    ET: You still have yet to say what I am wrong about …

    Zzzzz…

  103. 103
    ET says:

    Pav,

    Bob O’H is just saying that Spetner never clearly defines the word ‘information’. My point is that Spetner is using Shannon’s methodology applied to genetic information, i.e., the information that builds organisms, which includes, but is not limited to, Crick’s definition:

    Information means here the precise determination of sequence, either of bases in the nucleic acid or of amino acid residues in the protein.

    Each nucleotide would have 2 bits of information carrying capacity.

  104. 104
    Mung says:

    ET:

    Bob O’H is just saying that Spetner never clearly defines the word ‘information’.

    Bob O’H is correct. Spetner, in Not By Chance, never clearly defines the word ‘information’.

    ET:

    My point is that Spetner is using Shannon’s methodology applied to genetic information…

    As pointed out above, Spetner, in his book Not By Chance only mentions Information Theory or Claude Shannon in the preface, after which they exit, stage left, never to be heard from again.

    You’re the one who referred Bob O’H to Spetner’s books (@8) as if he would find answers there. He may indeed find answers in them, but they will be such as to contradict your claims.

  105. 105
    Mung says:

    Let’s take a look at Spetner’s book, The Evolution Revolution.

    The information of life is presently thought to reside to a large extent in the molecules of deoxyribonucleic acid (DNA) that are the constituents of the genome (the set of genes and their controls) in every living cell, and is therefore called genetic information.

    Doesn’t define information. Tells us where it’s “thought to reside.” Clearly, it’s not the actual sequence of symbols of DNA that are the information.

  106. 106
    ET says:

    LoL! The preface sets the context and in Not By Chance he makes it clear that it is the actual sequence of symbols of DNA that are the information:

    The information in a cell plays a role much like that played by information in a factory. The production file in a factory contains a set of instructions that tell what each worker has to do at each stage. The production file is information carried by printed symbols; the developmental instructions in the cell are information carried by molecular symbols.

    The DNA is the starting point of the genetic code

  107. 107
    ET says:

    And Bob O’H wasn’t talking about Not By Chance- he was talking about the article linked to in the OP that never defined ‘information’. And Mung doesn’t even know what my claims are as he doesn’t seem to be able to comprehend what he reads.

    My claims are supported by Spetner himself. Write to him, Mung

  108. 108
    Bob O'H says:

    ET – are you saying that Spetner doesn’t use the concept of information in the article cited in the OP that he uses in Not By Chance?

  109. 109
    PaV says:

    ET:

    That’s always been the argument here at UD for over ten years: information is not defined properly; or, that specified complexity is not defined correctly; or, fill in the blank.

    My point in posting Hartley’s “quantitative measure of information” is two-fold: to put it in digital form; and to illustrate that the concept is straightforward. Hartley actually addresses the issue of personal communication, which is what information is all about. And his definition doesn’t lend itself to ‘entropy’ arguments, as does Shannon’s. Hartley’s is not a statistical mechanics argument.

    The problem here is this: whenever you speak with someone of the liberal persuasion, you’ll find out that to win an argument they’re losing, they will at some point ask you: “What do you mean by …………….?” You can simply fill in the blank. Remember, everything depends on “what the definition of is is.”

    Some people don’t want to know the Truth.

  110. 110
    ET says:

    Bob O’H:

    are you saying that Spetner doesn’t use the concept of information in the article cite in the OP that he uses in Not By Chance?

    No. I was just correcting Mung as you were talking about the article and not his books. Had you read his books you wouldn’t be asking what Spetner meant when he talked about ‘information’.

  111. 111
    ET says:

    PaV,

    Yes, I understand. For me that was all put to rest with “Signature in the Cell” in which Meyer defined exactly what he was talking about by ‘information’- see my comment in 75 above

  112. 112
    Bob O'H says:

    PaV – three comments on Hartley:
    1. he assumes the symbols are equiprobable (an assumption Shannon relaxed).
    2. you can’t get away from entropy, because both entropy and Hartley’s approach are based on a multinomial likelihood, so his approach also has an entropic interpretation (equivalently, entropy has a multinomial interpretation).
    3. Hartley’s approach to information is pretty much the same as Shannon’s (for obvious historical reasons). So I don’t see how it helps in this discussion.
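
    Point 1 can be verified directly: with equiprobable symbols Shannon’s entropy reduces to Hartley’s log s, and any departure from equiprobability only lowers it. A minimal sketch (the skewed probabilities are illustrative):

    ```python
    from math import log2

    def shannon_bits_per_symbol(probs):
        """Shannon entropy H = -sum p * log2(p); terms with p = 0 contribute 0."""
        return -sum(p * log2(p) for p in probs if p > 0)

    s = 4  # e.g. four nucleotides

    # Equiprobable symbols: Shannon's entropy reduces to Hartley's log2(s).
    equal = [1 / s] * s
    assert abs(shannon_bits_per_symbol(equal) - log2(s)) < 1e-12

    # Unequal probabilities (Shannon's relaxation): entropy drops below log2(s).
    skewed = [0.7, 0.1, 0.1, 0.1]
    assert shannon_bits_per_symbol(skewed) < log2(s)
    print(shannon_bits_per_symbol(equal), shannon_bits_per_symbol(skewed))
    ```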

  113. 113
    PaV says:

    Bob O’H:

    PaV – three comments on Hartley:
    1. he assumes the symbols are equiprobable (an assumption Shannon relaxed).

    That’s because they are equiprobable. He’s talking about produced electrical currents of different voltages.

    They are equiprobable: the telegrapher can use any of the three ( or more) he chooses.

    2. you can’t get away from entropy, because both entropy and Hartley’s approach are based on a multinomial likelihood, so his approach also has an entropic interpretation (equivalently, entropy has a multinomial interpretation).

    Just because I’m using a combinatorial approach doesn’t make it “entropic.”

    Shannon’s argument, IIRC, has nothing at all to do with entropy. It’s simply that the formulas he uses to describe his notion of information resemble the Boltzmann equation for statistical mechanics.

    Further, entropy has to do with the ‘degrees of freedom’ of matter. Generally each atom has three. You can’t associate 26 degrees of freedom with atoms, as you can with the English language.

    3. Hartley’s approach to information is pretty much the same as Shannon’s (for obvious historical reasons). So I don’t see how it helps in this discussion.

    Hartley develops his idea of an information measure in a very different way than does Shannon. Shannon’s formula involves ‘probabilities.’ Hartley’s does not. It involves the number of symbols/characters that are being used, and the number of selections that are made.

    They’re entirely different. And, Hartley’s definition was first–that is, the most basic and intuitive. That is, the easiest to understand. Think here of Occam’s Razor.

    Shannon developed a ‘theory of communication.’ He uses the symbol ‘H’ for the amount of information from a “source.” This is the very symbol Hartley uses for total information ‘received’ from the ‘sender’ = source.

    If you want to use Shannon’s definition and theory, then it applies to the ‘fidelity’ of the transcription process—which is a process by which the information received from DNA is faithfully transcribed into RNA.

    Wikipedia: “genetic information” = nucleic acid sequence. Hartley’s definition of information directly applies.

    Why do you make this so difficult?

    Just found this:

    In summary, although it is true that Shannon’s theory is not interested in individual amounts of information, this does not mean that those quantities cannot be defined. In fact, in the first paragraph of his paper, Shannon (1948) explicitly says that his own proposal aims at extending the work of Ralph Hartley (1928), where a logarithmic function is introduced as a measure of uncertainty in the case of equiprobability; the Hartley function can be viewed as measuring an individual entropy.

    What is Shannon Information.

    BTW, in the second part of Hartley’s paper, which I simply skimmed through, he deals with ‘varying,’ and not ‘fixed,’ currents of electricity. And in this case, an averaging is required. At that point, a statistical average is used (entropy). This corresponds to population genetics throughout deep time, not the transcription and translation that takes place in living cells, where these processes are extremely ‘hifi,’ and which rely on the constancy of ‘fixed,’ individual codons–not on statistical averages.

    Hence, Hartley’s original idea of information–which directly applies not only to DNA, but to computer codes themselves–is not replaced but extended by Shannon; and that extension is not needed in analyzing the information content of DNA.

  114. 114
    Mung says:

    Symbols can carry information, and trucks can carry televisions. But symbols are not information and trucks are not televisions.

    Most people manage to grasp the difference between the medium and the message. “Information Theory” is interested only in the medium.

  115. 115
    ET says:

    Trucks can carry cymbals and cymbals can be part of a drum set. Trucks can carry drum sets and you can beat on a truck like a drum set. You may even find a part that can be used as a cymbal. Images of trucks can also be used as symbols. And if that image is on a thin sheet of metal it can be a cymbal symbol.

    “Information Theory” provided us with a way to quantify information. And that is why Spetner said what he did in the preface. We can now quantify genetic information (thanks to IT), which allows us to see if random mutations add new information and if the proposed process is even capable. Or maybe that was Spetner’s inference.

  116. 116
    ET says:

    In the case of the genetic code, ie genetic information, the symbols carry the information. And we can measure it, thanks to IT.

  117. 117
    ET says:

    Mung:

    Symbols can carry information…

    Umm, a symbol that doesn’t carry any information is not a symbol. If it’s a symbol it carries information.

    But symbols are not information …

    And yet they are not symbols without being endowed with it.

    Information, communication and symbols: it’s the trinity of information theory. You can’t have one without the others. Well, maybe Mung can…

  118. 118
    mike1962 says:

    I would say symbols “carry” nothing. Rather, they trigger a response in the receiver because the receiver knows how to map the symbol to specific meaning intended by the sender. Maybe it’s a quibble, but I think “trigger” is a better term.

  119. 119
    ET says:

    So the mRNA (codons) triggers a polypeptide response from the ribosome? Or does the mRNA carry a message to the ribosome to produce this polypeptide?

    Messengers carry messages vs. messages trigger responses. The first does seem redundant, while the second is a little vague: what (type of) response? Is it the same for everyone? What if the triggered response isn’t the one expected?

    Who expected some kid dressed up like a zombie to answer “I like turtles” to a question about his costume? But hey, it got him an appearance on Tosh.0.

    The more important question would be, under your analysis, what triggered that response given it had nothing to do with what the sender intended?

  120. 120
    Dionisio says:

    ET, mike1962:

    Carry, trigger,…?
    Interesting…
    What about represents?
    Or all of the above in different proportions?
    Or something else? 🙂
    Traffic lights carry, trigger, represent information?
    Stop signs?
    Morphogen gradients?
    mRNA ?
    How are the proteins SATB1 and SATB2 in GP’s excellent recent thread formed? What are the gene sequences associated with those proteins?
    How does it go from the gene to the mRNA in those two cases? How does it go from the mRNA to the actual proteins?
    Perhaps GP could comment on this after he comes back from summer vacation.
    UB could give us a hand with this too.

  121. 121
    ET says:

    Yes D, symbols represent something. That is because they carry information. 😉

  122. 122
    mike1962 says:

    ET,

    So the mRNA (codons) triggers a polypeptide response from the ribosome? Or does the mRNA carry a message to the ribosome to produce this polypeptide?

    I would say the former.

    Messengers carry messages vs messages trigger responses. The first does seem redundant while the second is a little vague- what (type of) response? Is it the same for everyone? What if the triggered response isn’t the one expected?

    If someone sends me a message in Kanji characters, nothing will be triggered, because I don’t understand Kanji. Nothing was “carried.” Only an attempt at triggering was made (and it failed). The success of the message depends on my prior knowledge of Kanji. Without that prior knowledge, no meaning was “carried” to me.

    Who expected some kid dressed up like a zombie to answer “I like turtles” to a question about his costume? But hey it got him an appearance on Tosh.0.

    I don’t understand the reference.

    The more important question would be, under your analysis, what triggered that response given it had nothing to do with what the sender intended [sic]?

    Symbols can cause unintended triggering, or no triggering at all (that is, an effect unintended by the sender), because they can be misunderstood, or not understood at all, by the receiver. The reason is that symbols “carry” nothing. They depend on the receiver’s prior knowledge for the proper utilization the sender intended.

    Something that actually “carries” information (or anything else) is no longer merely a symbol, but is a “container.” If I send someone a picture of a sandwich, and they have no idea what a sandwich is, no information, beyond the mere picture itself, is transferred. However, if I send someone a lunch box with a sandwich in it, the receiver will receive the sandwich (and might even eat it, if she is smart enough to figure out that it’s food), regardless of the receiver’s prior understanding or experience of what a sandwich is. The lunch box is not merely a symbol, it is a container. Codons seem to me to be symbols and not containers.

  123. 123
    ET says:

    Symbols contain information. That is what makes them symbols.

    If a Chinese person sends a message of Kanji characters to me my curiosity would be triggered and I would try to find out what, if any, information they contained.

  124. 124
    mike1962 says:

    ET,

    What information does this contain? :

    OBDITMOTNTDBGUTFBTBTFEODTSASEOTDPHTNAGUASTTDBIYDBTLITJATBMHSIT?

    Clearly these are symbols, but what information is this string of symbols carrying?

    What about this sentence:

    Harry is on fire.

    What information is that string of symbols carrying?

  125. 125
    ET says:

    Hi mike1962-

    Each symbol contains the information of what it represents. And yes, sometimes symbols can be combined such that the information contained in that combination is greater than that of each symbol.

    How about this:

    Symbols trigger a response because they carry/ contain information. 😎

  126. 126
    mike1962 says:

    ET,

    We can quibble about the terms, and in a certain sense I would say that symbols “contain” or “carry” information, but there is a distinction I’m trying to make. And the distinction is: context. If I have an intention for you to have a sandwich, I can communicate that intention in two ways. I can send you a recipe of how you can build a sandwich, or I can send you a sandwich. In the first case, you won’t be able to use my recipe if you don’t already have a pre-loaded context in which to interpret the symbols. We have to share the same language and experience sufficiently for you to properly interpret the symbols in order to produce my intention. In the second case, no contextual interpretation is necessary. You simply possess the sandwich I sent you.

    Symbols certainly can be arranged by the sender with the intention of a certain outcome, but the symbols are not self-containers of information. They require a sufficient receiving context for actual information transfer. So maybe “self-container” vs “container” is my quibble. Symbols “contain” information within a context but do not “self-contain” information. I would put DNA in the first category. I would put an entire cell in the second category.

  127. 127
    ET says:

    They require a sufficient receiving context for actual information transfer.

    They aren’t symbols without it. And seeing that symbols did not invent nor define themselves, clearly their information is not self-contained.

    As for DNA, well DNA is nothing without the genetic code and everything else to transcribe, edit, process and translate it.

    As for sending someone a sandwich, that too requires information. You need to know whether the person receiving the sandwich has any allergies or dietary restrictions. You don’t want to send a BLT to an Islamic cleric, or the meaning may be misconstrued.

  128. 128
    Mung says:

    Just think of all the information these two lowly symbols contain!

    0

    1

    If you really stop to think about it, these are not so much carriers of information; rather, they represent the making of a choice. In representing which choice was made, they could be said to carry information (about that choice).

    But that’s certainly not information in the normal sense of the word.

  129. 129
    ET says:

    Without information, symbols are merely marks or noises/sounds. When symbols are created they are endowed with the information agreed upon by the designer and the people who will be using them.

    Mung’s 0 and 1 only represent the making of a choice if that was already agreed upon. Otherwise they just represent the numbers zero and one.

    And carrying information about which choice was made is definitely information in the normal sense of the word.

  130. 130
    Dionisio says:

    mike1962 made a valid point:

    In most cases symbols represent certain information only if they are set according to the same protocol used by the producer/sender/transmitter and the receiver of the symbols.

    In Mung’s example @128, the symbols 1 and 0 represent tiny electric impulses of predetermined amplitudes. Those impulses activate and operate the microprocessor logic, which is designed around a binary code, not around the actual physical value of the electric impulses. However, the amplitude and frequency of the impulses must accord with the circuit parameters governing the activation and/or resetting of the gates that make up the logic tables, which map the different possible input cases handled by the designed circuits to their outputs.

    In embryonic development, the fascinating morphogenesis seems like a process where molecular signaling profiles –aka morphogen gradients– are formed following a complex choreography that guarantees the correct interpretation by the receiving cells, thus determining their individual fate. The localization/delocalization of the morphogen sources, their secretion rate, the type of morphogen, their different modes of transportation, their degradation rate, all that affects the resulting organs or tissues. Complex functionally specified informational complexity on steroids.

    When those of us who have spent years working on software development for complex engineering design projects watch in awe the marvelous cellular and molecular choreographies orchestrated within biological systems, we are humbled by the realization that the work we used to be so proud of suddenly looks like LEGO toys for toddlers.

  131. 131
    ET says:

    “There are 10 types of people in this world: those who understand binary and those who don’t” – anonymous “nobody” from the ARN forum

  132. 132
    Dionisio says:

    ET,
    I see your point too. Thanks.
    BTW, where are the other 8 types? 🙂

  133. 133
    ET says:

    So what kind of information does DNA contain, Shannon information or specified information [colloquial use]? Mere complexity or specified complexity? The answer is- both.

    First, DNA certainly does have a quantifiable amount of information carrying capacity as measured by Shannon’s theory. …

    …Thus molecular biologists beginning with Francis Crick have equated biological information not only with improbability (or complexity), but also with “specificity”, where “specificity” or “specified” has meant “necessary to function.”

    Thus in addition to a quantifiable amount of Shannon information (or complexity), DNA also contains information in the sense of Webster’s second definition: it contains “alternative sequences or arrangements of something that produce a specific effect.”…

    That’s from Meyer, Stephen C.; “Signature in the Cell”, pages 108-109

    And that is what Spetner was talking about: Shannon gave us a methodology to measure it, and function is an instance of information in its colloquial use.
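    Meyer’s two-sense distinction can be sketched in a few lines; `carrying_capacity_bits` is a hypothetical helper name, and the only point is that carrying capacity ignores meaning entirely.

```python
import math

def carrying_capacity_bits(seq, alphabet="ACGT"):
    """Shannon carrying capacity: log2(alphabet size) bits per symbol,
    independent of whether the sequence means anything at all."""
    return len(seq) * math.log2(len(alphabet))

# Two sequences of equal length have equal carrying capacity,
# whether or not either one codes for a functional protein.
print(carrying_capacity_bits("ATGGCC"))  # 12.0
print(carrying_capacity_bits("AAAAAA"))  # 12.0
```

Specificity, Meyer’s second sense, is what the capacity number by itself cannot capture.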

  134. 134
    Mung says:

    Yes, that quote from Meyer is great. It highlights the fact that if you’re going to talk about ‘information’ and DNA you need to be clear about which definition of information you are using.

    So Bob O’H was right all along.

  135. 135
    ET says:

    Mung, yes, Bob O’H was right in that Spetner did not define “information” in his linked article. However, Spetner did define it in his books. He even mentioned Shannon’s link to genetic information.

    Also, Meyer’s quote proves that you were wrong about me. But don’t worry, I don’t expect you to apologize for that. I know that is not your style.

  136. 136
    Mung says:

    Even in his books Spetner does not define the sense in which he is using the term. So it’s no surprise that he doesn’t do so in his article either.

  137. 137
    ET says:

    “Not By Chance”, chapter 2, “Information and Life”, covers it, Mung. And in the preface he introduced information theory as a quantification process. “The Evolution Revolution”, chapter 1, “Evolution and Information”, covers it also.

  138. 138
    Mung says:

    > And in the Preface he introduced Information Theory as a quantification process.

    We already know that the information theory definition is one definition of information. So we haven’t gained any information by reading Spetner.

  139. 139
    ET says:

    Do you know what the word “and” means, Mung?

    Information theory provided a way to measure information carrying capacity (Meyer). Functionality provides us with meaning, i.e., information in the normal sense.

    So what kind of information does DNA contain, Shannon information or specified information [colloquial use]? Mere complexity or specified complexity? The answer is- both.

    Do you understand what he means by that?

    So you have IT on one hand and everything else Spetner says about information on the other. Then you combine the two.

  140. 140
    Dionisio says:

    Isn’t the complex functionally specified information seen in the biological systems beyond the scope of the Shannon information concept?
    Aren’t those different categories of informational complexity associated with procedural functionality?
    Isn’t it like mixing classical and quantum mechanics?
    Or maybe it’s even worse than that?
    Perhaps one can measure certain aspects of biological information, as GP has done so well in his recent OPs about the functional information jumps in proteins. But that’s not the whole enchilada in biology.
    How do we measure the complex functionally specified informational complexity associated with morphogenesis, gastrulation, neurulation?
    Are you guys joking?
    Perhaps it’s tempting to measure things when taking a reductionist, bottom-up reverse-engineering approach to research. But that yields only a very limited descriptive measurement, which leaves complex functionality out of the big picture.

  141. 141
    Dionisio says:

    Not everything observed in scientific research is quantifiable/measurable by the currently available methods. Newer measuring approaches must be devised. Or we must humbly accept that we won’t be able to rationally measure all the processes we observe. Certain things may remain unmeasurable/unquantifiable by our intellectual capacity.

  142. 142
    Mung says:

    > Do you know what the word “and” means, Mung?

    Like the word ‘information’ it probably has more than one meaning.

  143. 143
    Mung says:

    > Information theory provided a way to measure information carrying capacity (Meyer).

    I do wish you would make up your mind ET.

    Does information theory give us a quantitative measure of information or does it give us a quantitative way to measure information carrying capacity?

    The two are not the same.

  144. 144
    ET says:

    Mung,

    Using Shannon’s methodology we can measure the number of bits in any message, i.e., Gitt information. Meyer’s point about information carrying capacity is that even if we don’t know the meaning, we can still quantify what’s there. That means even if we are receiving (seemingly) random characters, we can still quantify them. Then at least we know the maximum length of any possible Gitt information they contain.

    That said, if we do know the meaning, the quantification process is already started. You know the maximum possible number of bits of Gitt information present.

    Does information theory give us a quantitative measure of information or does it give us a quantitative way to measure information carrying capacity?

    Again, the two are not mutually exclusive.

    We look at a sequence of DNA and can calculate its information carrying capacity. From there we see if said DNA codes for a protein. If so, then we know we are working with Gitt information, but we can still use Shannon to quantify it. From there you can narrow it down via data compression and the redundancies in sequences.

    So what kind of information does DNA contain, Shannon information or specified information [colloquial use]? Mere complexity or specified complexity? The answer is- both.

    Say you have an important Gitt message to send but a limited number of bits. You would use Shannon’s methodology to see how long your message was, so you could tell whether it fit within the transmission limit.

    The machines don’t care about the meaning, just the length. The sender and receiver care about the meaning.
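    The length-budget scenario is easy to sketch; `fits_channel` and the 200-bit budget are made up for illustration, and a fixed 8 bits per ASCII character is assumed (the “Gitt” labeling is the commenter’s, not standard terminology).

```python
def fits_channel(message, budget_bits):
    """The channel cares only about length: 8 bits per ASCII character here.
    Returns (bits needed, whether the message fits the budget)."""
    needed = len(message.encode("ascii")) * 8
    return needed, needed <= budget_bits

# The sender cares what "Harry is on fire." means; the channel only
# needs to know it is 17 characters, i.e. 136 bits.
needed, ok = fits_channel("Harry is on fire.", 200)
print(needed, ok)  # 136 True
```
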

  145. 145
    ET says:

    Dionsio:

    Isn’t the complex functionally specified information seen in the biological systems beyond the scope of the Shannon information concept?

    No. We can calculate the number of bits in any given gene using Shannon’s methodology. We then compare that to the number of possible sequences that achieve the same end. From there we check the similarities of those sequences. Then we recalculate the probabilities: the number of possible sequences of that length vs. the number that give you protein x for function y.
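    The recipe described here resembles the Hazen–Szostak “functional information” measure, -log2 of the fraction of all sequences of a given length that perform the function. The counts below are toy numbers chosen purely for illustration, not real data about any protein.

```python
import math

def functional_information(n_functional, seq_length, alphabet_size=4):
    """Hazen-Szostak-style functional information in bits:
    -log2(fraction of all sequences of this length that work)."""
    total = alphabet_size ** seq_length
    return -math.log2(n_functional / total)

# Toy numbers: suppose 10**20 sequences of a 100-base gene yield the
# function, out of 4**100 possible sequences of that length.
bits = functional_information(10**20, 100)
print(round(bits, 1))  # 133.6
```

Note that this is a different quantity from raw carrying capacity (200 bits for a 100-base sequence): the rarer the function, the closer the two numbers get.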

  146. 146
    Dionisio says:

    ET:

    There is more complex functionally specified information beyond the genome. Genes are just part of the whole enchilada.

  147. 147
    ET says:

    Agreed, but we measure what we can.

  148. 148
    Dionisio says:

    ET,

    Yes, that’s a valid point.

  149. 149
    ET says:

    The point, of course, is that the genetic information we can measure is material: DNA, RNAs, proteins. Materialists think they have a monopoly on material things. Being able to measure the amount of functional, material, genetic information required for the most basic living organism body-slams that notion.

    And a new slogan is born:

    Intelligent Design- Using material information to defeat materialism one bit at a time

  150. 150
    PaV says:

    ET:

    The beauty of Hartley’s derivation of his “quantitative measure of information” is that it is simple, and widely applicable.

    Shannon’s measure is geared directly to electronic communication, and is defective in the sense that it doesn’t adapt readily to what we, as humans, know to be information—as in, e.g., this sentence I just wrote.

    That Hartley’s derivation is prior in time to Shannon’s lends it a certain authority. Further, that it was defined and given a meaning before there ever was a creationist debate, or before ‘information’ was ever considered an attack on neo-Darwinism, makes it a good and sufficient definition of information.

    Here at UD, we would be wise to find an effective way of using it. It is a more basic, and intuitive measure of information.

  151. 151
    Mung says:

    ET: Using Shannon’s methodology we can measure the number of bits in any message, ie Gitt information.

    It just keeps getting better and better. iirc, Gitt defined seven different kinds of information. Shannon’s was only one of them and most certainly does not apply to all of them.

  152. 152
    Bob O'H says:

    PaV @ 150 –

    The beauty of Hartley’s derivation of his “quantitative measure of information” is that it is simple, and widely applicable.

    As long as you have equiprobability. Once you lose that, I’m not sure it’s useful.

  153. 153
    Mung says:

    Hartley information
    Shannon information
    Dembski information
    Spetner information
    Gitt information
    ET information

    Will it ever end!

    🙂

  154. 154
    ET says:

    PaV:

    Shannon’s measure is geared directly to electronic communication, and is defective in the sense that it doesn’t adapt readily to what we, as humans, know to be information—as in, e.g., this sentence I just wrote.

    Why not? We can use Shannon’s methodology to calculate the number of bits it contains.

  155. 155
    ET says:

    Mung:

    It just keeps getting better and better. iirc, Gitt defined seven different kinds of information. Shannon’s was only one of them and most certainly does not apply to all of them.

    Evidence, please; try making your case.

    If anything can be broken down into bits, then Shannon’s methodology applies.

  156. 156
    ET says:

    Mung:

    Hartley information
    Shannon information
    Dembski information
    Spetner information
    Gitt information
    ET information

    Will it ever end!

    You forgot Mung misinformation…

  157. 157
    PaV says:

    Bob O’H:

    As long as you have equiprobability. Once you lose that, I’m not sure if it’s useful.

    How, where, and in what ways is ‘equiprobability’ lost?

  158. 158
    PaV says:

    ET:

    Why not? We can use Shannon’s methodology to calculate the number of bits it contains.

    Shannon specifically cites his reliance on Hartley’s measure of information. Shannon’s measure is useful, no doubt. But it is principally concerned with electronic communication and is not built upon notions of human communication per se; whereas Hartley’s starting point is always ‘human’: he basically uses the alphabet as the source of ‘symbols’ and the telegraph operator as the ‘selector.’

  159. 159
    Mung says:

    PaV: How, where, and what ways, is ‘equiprobability’ lost?

    When events are not equally probable.

  160. 160
    Bob O'H says:

    PaV @ 157 – letters in the English language are one example.
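    The point can be checked numerically; the letter frequencies below are rough, illustrative values (the less common letters are lumped together and spread evenly), not a definitive corpus measurement.

```python
import math

# Approximate English letter frequencies for the nine most common letters.
freqs = {'e': 0.127, 't': 0.091, 'a': 0.082, 'o': 0.075, 'i': 0.070,
         'n': 0.067, 's': 0.063, 'h': 0.061, 'r': 0.060}
other = 1.0 - sum(freqs.values())      # remaining 17 letters, spread evenly
probs = list(freqs.values()) + [other / 17] * 17

entropy = -sum(p * math.log2(p) for p in probs)
hartley = math.log2(26)                # equiprobable (Hartley) bound

print(round(hartley, 2))   # 4.7
print(round(entropy, 2))   # below 4.7
```

The gap between the two numbers is exactly what is lost when equiprobability fails.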

  161. 161
    PaV says:

    When it comes to nucleotides, they’re not too far from equiprobability. One can adjust for this lack of equiprobability if one chooses. But using Hartley’s formula gets you close enough to what you need to know, unless you’re not interested in ‘knowing.’

    Newton’s formula for gravitation is not entirely correct. But it’s a great approximation, and it saves one from the tedium of solving Einstein’s equations for the same thing. They don’t use general relativity when sending probes out into space, but simple Newtonian gravity. It’s close enough. Same here. Again, unless you don’t want to concede a thing.
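    The “close enough” claim checks out numerically; the GC fractions below are illustrative, assuming G+C and A+T are each split evenly within their pair.

```python
import math

def entropy_per_base(gc_fraction):
    """Shannon entropy (bits/base) when GC content deviates from 50%,
    with G=C and A=T within each pair."""
    probs = [gc_fraction / 2] * 2 + [(1 - gc_fraction) / 2] * 2
    return -sum(p * math.log2(p) for p in probs)

# Hartley's equiprobable measure gives exactly 2 bits per base.
print(entropy_per_base(0.50))             # 2.0
# Even a fairly skewed 60% GC genome loses only a few percent.
print(round(entropy_per_base(0.60), 3))   # 1.971
```

So for nucleotides the Hartley figure of 2 bits/base is indeed a Newton-style approximation to the exact Shannon value.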

Leave a Reply