Michael Denton on Mathematics and Stardust

I’m not quite sure who Michael Denton is.

I’ve read his two books, Evolution: A Theory in Crisis and Nature’s Destiny.

It was Crisis that first inspired me to exclaim to myself, “How could you have been so stupid as to have been duped into believing this transparent Darwinian-gradualism-and-random-mutation-natural-selection nonsense?”

In Destiny he presents some remarkable insights, not just about the fine tuning of the laws of physics, but about the remarkably fine-tuned properties of water, the carbon atom, light, and much more, for the eventual appearance of living systems.

For Denton’s comments about stardust see here.

For his comments on mathematics see here.

So far, ID theory has addressed two primary domains: cosmology and biology. However, I believe that Denton elucidates another area of ID interest, and that is mathematical ID.

How is it that the laws of physics and so much of physical reality can be represented by mathematics? As Denton explains, humans did not invent math; it is built into the nature of things and was discovered. How is it that random mutations filtered by natural selection produced the human mind that can discover not only the beauty of math, but its application in the description of how things work?

It was as a result of the observations presented above, and many more, that I finally decided I could no longer muster up enough blind faith to be an atheist. The only rational conclusion I could reach is that it’s all the product of design, by an indescribably powerful and creative intelligence.

The reason I say that I’m not quite sure who Michael Denton is, is that he appears to be some kind of “vitalist.” I’m not quite sure what that means, but he certainly has no theological axe to grind.

No matter what you might think about Michael Denton, he is certainly not a mindless, knuckle-dragging, uneducated, science-destroying Christian like me.

Comments
lol She can do it at her blog, but not here. Does this remind anyone of MathGrrl? Mung
Upright BiPed: I said I am happy to do this at my blog. I have given you the link. To say that I have "blown off the simulation" is quite unwarranted. I look forward to your first post at my blog. Elizabeth Liddle
Hasn't she demonstrated the generation of information by choosing not to demonstrate the generation of information? Mung
With no indication otherwise, it seems Dr Liddle is blowing off the simulation. She has also ignored the retraction of her falsified claim. Nothing else could, nor should, be expected. - - - - - - - (I'm out for the weekend...) Upright BiPed
Born Again, for once I am really enjoying your links. Before reading this entire thread, which I am probably going to do, I would like to say that I agree with Denton's thinking, and that he has overcome the human tendency to anthropomorphize God. It is obvious that he believes in an awe-inspiring eternal mind, but he does not subscribe to human dogmas. avocationist
I love olives. Mung
Olives anyone? Dr Liddle, if you would like to test my ability to make my case all over again, I will happily oblige you. If, however, you consider a ten-week-long conversation as enough, then I am happy to return to the point in this conversation where I accepted your description, and only suggested that we work out a couple of issues. That description is as follows:
LIDDLE: Starting only with non-self-replicating entities in a physics-and-chemistry (plus random kinetics) environment, self-replicating “virtual organisms” can emerge that contain arrangements of virtual matter represented as strings that cause the virtual organism to self-replicate with fidelity, and thus determine the output of that system, namely a copy of that system. The arrangement must produce its output by means of an intermediary “virtual object”. This “virtual object” must take the form of a second arrangement of “virtual matter” that may interact with the strings and with some other “virtual object” that affect the fidelity of the self-replication of the “virtual organisms” without either permanently altering, or being altered by, the interaction.
and my response...
BIPED: In deference to your many attempts to integrate the requirements of information into your simulation, I suggest that we work from this definition of yours, but I take a couple of exceptions. My priorities have always been to include the existence of specific objects (discrete representations and protocols) and specific dynamics (the discrete-ness of the objects, the break in the causal chain, and the resulting effect – which is the output of the system being the replication of the system itself). As far as your first paragraph above, I have no particular problems. What this tells me is that you will produce a system that is reproducing copies of itself by means of “arrangements of virtual matter represented as strings” which determine the output of the system – a copy of itself. In this paragraph you used the word “represented” which you otherwise objected to when I used it, but I will not throw up a fuss. No one interested in this exercise will misunderstand its use, and if they do, it won’t change the results of your simulation. Therefore, this paragraph establishes my priorities of representations and output, but says nothing of protocols or the break in the causal chain. For this we turn to your second paragraph. In your second paragraph you say: “The arrangement must produce its output by means of an intermediary ‘virtual object’. This ‘virtual object’ must take the form of a second arrangement of ‘virtual matter’ that may interact with the strings and with some other ‘virtual object’ that affect the fidelity of the self-replication of the ‘virtual organisms’ without either permanently altering, or being altered by, the interaction.” This paragraph is made up of two sentences. The first of these introduces the protocol as an intermediary virtual object, which is perfectly fine, and leaves only the critical dynamic relationship to be established. This is where I take my exception, and I suggest that we leave the other parameters alone, and focus on dealing with this final piece. The observations regarding the dynamics of the protocol are critical. Without them, nothing else can follow, or in the case of the representation – it couldn’t even exist.
Now as I said, this definition of yours is acceptable with some fine tuning. It is a bit clunky though, and is certainly no better than the earlier one I suggested above. In any case, the ball is in your court. Upright BiPed
UPB @ 251, I call that tactic "Argument by Attrition". Ilion
Personally, I think this is a waste of time, given that for the past ten weeks we did this exact same thing, and we will surely end back here at this exact same point - and you f'kin know it. But hey, anything to save you from having to admit a mistake. Hell will freeze over first. Upright BiPed
"I’m ready right now UPD." Good. Do nucleotides directly template proteins? If they do not, then from a dynamic point of view, how does the nucleotide input constrain the amino acid output if they have no physical association? Upright BiPed
Ah, I see we cross posted... Upright BiPed
Dr Liddle, Looking at your positioning statement again, and remembering … after going through all the observations where you repeatedly said things like “Great” and “I Like it” etc etc, you may now need to position me as a ‘kook’ who is out of touch with the evidence, but I think you’ll have a hard time. In any case, the evidence is the last place you want to go for an exit from this conversation, so it should be no surprise that I should serve as your scapegoat. That is no one’s decision but your own. Odd, though. Every single point I made in discovery: you fought it, you thought about it, and then eventually organized it into your own understanding. When it was the “representation” you had problems with, we fought our way through it, and it became an “arrangement in matter”. When it was a protocol that stopped the advance, we simply talked our way through it until we had a “physical object”, a facilitator, subject to physical law. Even the terror of having a dynamic “break in the causal chain” was summarily discussed until the pieces came together as a “dissociated link”, as you referred to it. One by one, each of the required objects and their critical dynamic roles came into focus. I can cut and paste every single instance where the objects and dynamics were recognized as both legitimate and accounted for. All I am asking you to do now is to get on with it. - - - - - - - - - - Or, as a gesture of simply recognizing the obvious, you could retract your mistaken claim that ID proponents can’t make a case for ID. Your very involvement (in trying to build a simulation to refute the case for ID given to you) is a real-time unavoidable falsification of that claim. Upright BiPed
Any time you are ready to challenge the observations, then I am more than happy to oblige.
I'm ready right now UPD. You provide the observations and I'll challenge them. But I suggest we do it here: http://theskepticalzone.com/wp/?p=1 This thread takes an unconscionable time to load. Elizabeth Liddle
Did I come in first, or was it merely an honorable mention? Mung
Template http://en.wikipedia.org/wiki/RNA http://en.wikipedia.org/wiki/Transcription_%28genetics%29 Does transcription demonstrate information? Mung
"However, I am fairly convinced that you have somehow dug yourself into a belief system whereby you are so convinced that your argument makes sense that rather than examine it for circularities and inconsistencies" Nice positioning statement. Any time you are ready to challenge the observations, then I am more than happy to oblige. Upright BiPed
Upright BiPed:
Dr Liddle at 221, Elizabeth, this is becoming (has long since become) an unnecessary struggle to get you to agree to what you already know the observations demonstrate to be the case. This façade where you are trying to get me to agree to a definition, or a methodology, is patently disingenuous – as can be observed from the comments you make here and elsewhere.
So you are back to accusations of dishonesty. Ah well. Disappointing though.
In post 221 you cut and pasted a partial description I gave elsewhere that describes the physical objects and some of the observed dynamics of information transfer, then you turn right around and want me to agree to a model of direct templating as a demonstration of information?
No, UBP, as I have said several times, I accept that you do not accept "direct templating" as a demonstration of information. Which is why I have specified non-direct templating in my specification, i.e. an inert intermediate object that effects the translation from the pattern in the polymer to the functional object.
And I suppose this is the point where I am supposed to rehash the entire description and go point by point as to why the direct templating video is not sufficient as a demonstration of information transfer (even if you can simulate it) – but I am not going to do it. After ten weeks and tens of thousands of words, it is proven to be a fruitless exercise.
What? What "direct templating video"?
In our last exchange I posted some of the working definitions we had gone through and decided to adopt your last description as a valid starting point in order to finish up, and I did so because you captured some of the key points while some others needed to get into the definition with a little more clarity. These other points are not ones that you don’t understand, indeed you understand them completely; they just needed to be made more clear so that your simulation would be a usable demonstration of that which you intend to demonstrate.
Which I did.
Let me ask you a question Dr Liddle. The video of direct templating which demonstrates absolutely nothing of a dissociated representation and absolutely nothing of a dissociated protocol – the model you asked me to accept – would you accept it, Dr Liddle? If you were to succeed, you would have demonstrated for the first time that information (representations inertly coordinated to protocols) can emerge from a system of chance and necessity. Do you think even for one minute that such a demonstration wouldn’t be immediately proven false by the facts?
I have no idea what you are talking about. We seem to have watched different videos.
Look it – there is a description of information transfer that has surfaced in this conversation that you yourself helped to inspire. It gets right to the point with an economy of words and leaves out all the ambiguities that you keep returning to – those same ambiguities you say you want reduced to zero. (And as far as a set of operations to actually verify the presence of information – that methodology was well known before you were born, so that’s not an issue unless you intend to continue to ignore it). That description is this: Starting only with non-self-replicating entities in a physics-and-chemistry (plus random kinetics) environment, self-replicating “virtual organisms” can emerge that contain dissociated representations embedded in the arrangement(s) of matter. These arrangements represent the system that created them, and will determine the output of that system by means of an intermediary “virtual object”. Without becoming incorporated, this object may interact with either the representation or output, or both, but where the two remain whole and physically separated. 1) Dissociated = having no physical relationship to that which it represents. 2) Intermediary = serves the dynamic purpose of allowing the input representations to determine the output while they each remain discrete, a facilitator. This is the description I am willing to stand by. Otherwise I can break off and let this conversation morph into a conversation about CSI, and replication fidelity, and RNA polymerase. You can then visit among those who think as you, and blame it all on me as the one who was afraid to put their money where their mouth is. I am certain you’ll get no push back whatsoever. To the contrary, you’ll be heralded as a queen of empiricism. All you have to do is sleep with yourself.
And you have addressed none of the points I raised with regard to this. Upright BiPed, I am tempted, at this point, to mirror your own attitude to me, and conclude that you are deliberately putting roadblocks in the way of any demonstration of my claim. However, I will not. The reason I will not, is that I genuinely do not believe that you are deliberately setting up road-blocks. However, I am fairly convinced that you have somehow dug yourself into a belief system whereby you are so convinced that your argument makes sense that rather than examine it for circularities and inconsistencies, you jump to the conclusion that someone (me) who does not share your view must be being obtuse or dishonest. So does Ilion, so does Mung. I find the arrogance quite extraordinary. Repeatedly, you, Ilion, and Mung have cast aspersions on my honesty and/or intelligence rather than even consider, for a moment, the possibility that I might actually have a point.
(I noticed once before that you were told the idiots on this site “would like to stone you to death” if we “had the chance”. Your response to this pathetic accusation was more polite conversation. Given that you now face having to admit that ID proponents can make a valid case, I can assume politely misrepresenting my efforts in this conversation would be something of a walk in the park for someone with your disposition).
[censored] I will not respond to this until I have had a chance to regain my equipoise. Elizabeth Liddle
Mung, I'll take another look at that paragraph tomorrow to make sure it even says what I intended; my brain isn't functioning properly at the moment, either through overuse or neglect, I haven't decided. Thanks again. material.infantacy
Mung, I suppose I should have said "specified and complex." Good catch, thanks! material.infantacy
material.infantacy:
if necessity can produce the sequences, then they’re not “specified”; they’re inevitable — explicated by law.
They are specified, simply (e.g., F=ma), but not complex. Or, as a sequence: (1..1000000).each { |i| puts i }. But it's quite difficult to represent information with such a sequence :) Mung
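A minimal Ruby sketch of the contrast being drawn here (the million-term sequence and the digit alphabet are arbitrary illustrative choices, not anything proposed in the thread): a law-generated sequence collapses to its short generating rule, while a contingent sequence of the same length typically admits no comparably short description.

    # Illustrative sketch: "specified, simply, but not complex" vs. contingent.
    RULE = "(1..1_000_000).to_a"                    # a short rule that fully determines...
    lawlike = (1..1_000_000).to_a                   # ...this million-term sequence
    contingent = Array.new(1_000_000) { rand(10) }  # no comparably short rule regenerates this

    puts "rule size:           #{RULE.bytesize} bytes"
    puts "law-like sequence:   #{lawlike.length} terms, all recoverable from the rule"
    puts "contingent sequence: #{contingent.length} terms, roughly #{(contingent.length * Math.log2(10)).round} bits, typically incompressible"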
I appreciate the sentiment UB, no apologies necessary. I think everything I proposed is summed up in #231 with some minor additions in #232. However I don't have much more to add at this point, so I'll just say that I chose RNA polymerase because it begins the translation/transcription process, and must itself be encoded in the DNA. This exposes a circularity and a "search squared" issue. It would also appear that no protein can be sequenced without it. So of any protein that would need to be present right off the bat in a functional system, it's a decent candidate (and DNA polymerase for replication). Anyhoo, I wanted to point out to EL that there was a search issue and a specification issue, and I've done that as well as I know how. I don't expect to be adding much more, if at all. Looking forward to reading you again in the future. Best, m.i. material.infantacy
By the way, MI, the RNA polymerase is indeed a fantastic object, but it does not cause the sequence to exist as it does, nor does it allow a representation to exist. On the other hand, the tRNA does not set the sequence either, but, as a protocol, it does allow a discrete representation to exist. Upright BiPed
MI, I have no problem at all with your participation. I should have addressed your comments earlier, but I wasn't trying to expand the conversation, I was trying to constrain it to the observations we started making two months ago. I was mistaken in not doing so. My apologies. Upright BiPed
UB, if I made things more difficult for you with my intrusion, I apologize. But I felt I had to get a few points across. material.infantacy
"... patently disingenuous ..." It seems there is yet another convert to the "you know, she really is intellectually dishonest" camp. Ilion
Dr Liddle at 221, Elizabeth, this is becoming (has long since become) an unnecessary struggle to get you to agree to what you already know the observations demonstrate to be the case. This façade where you are trying to get me to agree to a definition, or a methodology, is patently disingenuous - as can be observed from the comments you make here and elsewhere. In post 221 you cut and pasted a partial description I gave elsewhere that describes the physical objects and some of the observed dynamics of information transfer, then you turn right around and want me to agree to a model of direct templating as a demonstration of information? And I suppose this is the point where I am supposed to rehash the entire description and go point by point as to why the direct templating video is not sufficient as a demonstration of information transfer (even if you can simulate it) – but I am not going to do it. After ten weeks and tens of thousands of words, it is proven to be a fruitless exercise. In our last exchange I posted some of the working definitions we had gone through and decided to adopt your last description as a valid starting point in order to finish up, and I did so because you captured some of the key points while some others needed to get into the definition with a little more clarity. These other points are not ones that you don’t understand, indeed you understand them completely; they just needed to be made more clear so that your simulation would be a usable demonstration of that which you intend to demonstrate. Let me ask you a question Dr Liddle. The video of direct templating which demonstrates absolutely nothing of a dissociated representation and absolutely nothing of a dissociated protocol - the model you asked me to accept - would you accept it, Dr Liddle? If you were to succeed, you would have demonstrated for the first time that information (representations inertly coordinated to protocols) can emerge from a system of chance and necessity. Do you think even for one minute that such a demonstration wouldn’t be immediately proven false by the facts? Look it – there is a description of information transfer that has surfaced in this conversation that you yourself helped to inspire. It gets right to the point with an economy of words and leaves out all the ambiguities that you keep returning to - those same ambiguities you say you want reduced to zero. (And as far as a set of operations to actually verify the presence of information – that methodology was well known before you were born, so that’s not an issue unless you intend to continue to ignore it). That description is this:
Starting only with non-self-replicating entities in a physics-and-chemistry (plus random kinetics) environment, self-replicating “virtual organisms” can emerge that contain dissociated representations embedded in the arrangement(s) of matter. These arrangements represent the system that created them, and will determine the output of that system by means of an intermediary “virtual object”. Without becoming incorporated, this object may interact with either the representation or output, or both, but where the two remain whole and physically separated. 1) Dissociated = having no physical relationship to that which it represents. 2) Intermediary = serves the dynamic purpose of allowing the input representations to determine the output while they each remain discrete, a facilitator.
This is the description I am willing to stand by. Otherwise I can break off and let this conversation morph into a conversation about CSI, and replication fidelity, and RNA polymerase. You can then visit among those who think as you, and blame it all on me as the one who was afraid to put their money where their mouth is. I am certain you’ll get no push back whatsoever. To the contrary, you’ll be heralded as a queen of empiricism. All you have to do is sleep with yourself. (I noticed once before that you were told the idiots on this site “would like to stone you to death” if we “had the chance”. Your response to this pathetic accusation was more polite conversation. Given that you now face having to admit that ID proponents can make a valid case, I can assume politely misrepresenting my efforts in this conversation would be something of a walk in the park for someone with your disposition). Upright BiPed
EL, I'd like to reiterate that I did rush a little through that last post, so if you think I glossed over anything, taken in reference to my #231, let me know and I'll try and get back to it. m.i. material.infantacy
Elizabeth, continuing my comments on your comments on my comments. xp ...
”But I am concerned about your point (especially given the reaction to my posting of the clock video) that were I to do so, by the method I (transparently) propose, I would have “smuggled in” design.”
My apologies if my own reaction to the video seemed a little dismissive, but it was presented by the author in a somewhat cocky and disdainful manner, which made the whole “bluff” a chore to watch, and I quickly lost interest. Let me also clarify that calling something a “toy” is not entirely an insult in my book. Toys are wonderful and delightful things, as are stories of many sorts; and I can hardly do their significance justice in this post. I just see no parity between GAs and this issue, as my previous points should make clear. However a GA is no better than a blind search unless both the fitness function and the “variation engine” have some idea where they’re going. I know you’ll probably disagree with this characterization, and it may not hold true in all cases; and I’ll refrain from commenting on GAs further, as I think the subject has little relevance to the scope of the problem at hand.
”This seems to me to get to the heart of the issue that I was getting at when I made my original claim. My position is that Darwinian process can not only generate Information, and, indeed design things i.e. produce things that clearly serve some sort of function to something (preserve an organism in existence, for instance, or cause a population to persist in a changing environment) but does so in a manner that is directly analogous to the way our own minds work.”
This may be true, but I think this claim would need to be substantiated rigorously under a physical simulation, or preferably by empirical demonstration. As I suggested in my last post, necessity negates specified complexity. If it could be demonstrated that actual necessity made SC unnecessary, we could all go home.
”However, if other people don’t find that a stumbling block – disagree with you, in other words, that my proposed simulation (which would, in effect, be a program that does what that Szostak animation does), if successful, would not have supported my claim, then I am happy to proceed.”
Agreed. If that’s the case, I don’t have any skin in the game. xp And I’d be happy to see how your project proceeds and concludes. I hope you understand there is no ill will here, only a need to expose what I think are the core relevant issues.
”Well, I figured it would be technically simpler to use a tRNA analog. And I was not proposing to emulate anything as complex as RNAP. In other words, my cells would probably just translate anything translatable in the genome string. Nonetheless I am interested in your point: ... I’m not quite sure about what you are asking re circularity, here, MI, although I agree that there are chicken-and-egg problems inherent to any OOL theory.”
I think I made my point in the previous post, but if there’s more I need to say, feel free to bring it up again. I’m asserting that the specified complexity issue relates to the OOL issue in regards to the chicken-egg problem, because of the need for a distinct specification and its distinct product (RNAP and arguably DNAP), and their need to coexist.
”I’d find myself with “lipid” vesicles, as per Szostak, containing self-replicating polymers. You don’t need any fancy enzymes to produce a self-replicating sequence, you just need a chemistry in which single strand polymers tend to form, with spare binding spots on each monomer that will attract its opposite number, resulting in double chains. If something (e.g. a temperature change) results in a splitting of the double chain, you have two singles with complementary sequences. Let these loose in a sea of monomers and they will become matching double chains again.”
I do find this interesting, but I’ll fall back to what I’ve already stated in my previous post. Skipping forward over some engaging descriptions... and then some additional comments on GAs...
”But here there is no fitness function provided by me at all – the only fitness function is that intrinsic to any population of self-replicators, namely anything that, in that environment promotes self-replication. Which may be greater permeability of the vesicle or less; it may be greater length of polymer or less; it may be resistance to division or greater potential for division.”
Again I’ll suggest that I think the bar is high -- that there can’t be any smuggled specification, nor any contrived necessity, and there should exist a specification that exists independent of its product. Sorry if I’m missing the point of your description.
”The point is that as I don’t know, at any given point (because the environment itself will be constantly changing, not least by the products of the critters themselves), what will best promote longevity and/or division, I can’t have “smuggled it in”! If you disagree, can you explain why?”
I think what I’ve stated already covers this, but I’m often myopic in my approach to things. So if there’s something critical that I’ve missed, or that answers my concerns, just point me back to it and I’ll do my best to answer. My apologies if this follow up response seemed less than satisfying. In my defense, I did read all of your recent posts before composing my first response, which I felt did some justice to the comments you made. Take care Elizabeth. The bloke, m.i. material.infantacy
Elizabeth, thanks for spending some time with my previous comments. I’ve made several more, after making a pass through your responses. I’m not obligating you to respond to everything -- it’s too wordy and disorganized -- I just needed to “get this out there” for my own benefit, based on some of your comments. (I promise no more “sarcastic disillusionment”). It gives others, particularly ID proponents, a chance to comment, refute, modify, nullify what I’m saying, so I can rethink or expand my observations about this. Apologies in advance for bad grammar, excessive word and phrase reuse, and for restating things I’ve already said.
”Because if your view is widely shared, MI, the definitional problem is not with a definition of information (which we seem to have got, essentially) but with the definition of evolutionary search.”
I suppose that’s entirely so. I can’t say my view is widely shared -- but yes, I think that if you’re going to evolve “information,” that is, “specified complexity,” then your simulation should address the problem of how one finds function in a sea of configuration space. I’ve been suggesting all along that if you’re not searching for a function, you’re specifying one -- and I’ll go on to say that if you’re generating sequences based on a virtual concept of “necessity,” you’re not generating specified complexity at all. This is because SC is the absence of necessity; if necessity can produce the sequences, then they’re not “specified”; they’re inevitable -- explicated by law. To paraphrase myself from another thread, “Specified complexity is the contingent prearrangement of materials which correspond to technological sophistication.” Again, if the functions are generated via necessity, then they’re not contingent -- they’re inevitable, and hence unspecified. Contingent prearrangement suggests that the sequence is A) highly improbable; B) corresponds to a function for which the probability of arriving at that function by chance is less than 10^-150. This implicates a mind, in my view, and I suggest in the view of others here. The only other way to skirt this is to implicate necessity.
”What I set out to do was something much simpler – to demonstrate that Information (by any definition, pretty well, certainly any definition used in ID claims) could be generated by Chance and Necessity.”
Amidst all my rambling in this post, let me try to put this as a core concept: Specified Complexity is the absence of necessity; AND the absence of chance -- that is, contingent arrangements of a sequence of characters for which the probability of finding that particular sequence is less than 10^-150. If you’re truly going to demonstrate the role of necessity in the evolutionary generation of specification, then you would NEED to model physical reality. One can’t assume necessity in order to demonstrate its efficacy in this situation, I’m asserting. So the core question in my mind is this: how do you generate specified complexity absent necessity and chance? If you make your own rules for necessity, then you disqualify the results. If you instead promote chance, you need to find the function in a vast configuration space, which I suggest is computationally impossible. I realize that this might seem unfair, that the bar is being set too high. But I can see no other way of addressing the problem -- a real problem -- and not something which can be overcome with several thousand lines of code.
”I need to clarify something: I did not set out to model the Origin of Life as We Know It.”
I realize that. I’m suggesting however, that there is at least one intersecting issue between the OOL problem and what you propose (the blind generation of information), and that’s the simultaneous presence of these three things: 1) The DNA which contains the message indirectly corresponding to the product, RNAP (Thanks for the acronym; I collect them. xp); 2) the product which is produced -- that is, RNA polymerase (RNAP); 3) the product which does the translating between the DNA and the product, in this case also RNA polymerase, which must exist before it can itself be translated. What should jump out at anyone examining the above is that the specification is required to produce RNAP, but that RNAP is required to produce itself by way of the specification. I’m asserting that this circularity is present in the concept of information itself, or more specifically, present in the concept of specified complexity. This is the chicken-and-egg problem that you refer to, and isn’t just some obscure aspect of OOL, it’s central to the entire question of OOL and specified complexity, at least that’s what I’m suggesting. I gave the answer to the riddle in a previous post on this thread: that RNA polymerase, apparently “gives rise to itself.” It must be present in order to produce itself. How do you get around this without a scenario that finds, simultaneously, the function AND the specification? Here’s how I chose to view the information problem: ”Information: the presence of *specified complexity* in a system exhibiting the *irreducibly complex* integration of independently designated parts , each of which are constructed at a bit depth in excess of 500. Specified Complexity: the specification for a functional system which has only an abstract association with the product it specifies, limited to acting as a template or archetype, from which the product it represents is instantiated by way of an intermediary. Irreducible Complexity: the requirement that in a functional system of integrated components, the removal of any single component causes the system to become non-functional.” Admittedly, the above needs some work, but it is the problem, as I see it.
”It may well be that modeling the origin of RNA polymerase is a fierce OOL problem and I certainly do not have the skill set to do it! Even supposing that that in particular proves to be the biggest problem in OOL research.”
I’m not actually suggesting a physical model of RNA polymerase, which is why I referenced it as a “black box.” I’m suggesting that conceptually, it needs to be present. As a black box, all of its intricacies and dependencies could, for the most part, be assumed, I believe, without “giving away the store,” so to speak. I’ll continue with more as I get the chance; I want to try and at least comment on your core questions and/or objections, as time allows. Thanks again, m.i. material.infantacy
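The arithmetic behind the 10^-150 and 500-bit figures in the comments above can be sketched in a few lines of Ruby (the 1,000-residue example length is an assumption for illustration, not a measured protein size):

    # 10^-150 corresponds to roughly 500 bits: -log2(10**-150) ~= 498.3.
    UPB_BITS = 500
    AA_ALPHABET = 20                            # amino acids
    bits_per_residue = Math.log2(AA_ALPHABET)   # ~4.32 bits if every residue is fully contingent

    # Shortest fully contingent chain that clears the bound:
    puts "length needed to exceed #{UPB_BITS} bits: #{(UPB_BITS / bits_per_residue).ceil} residues"

    # Raw chance-hypothesis bits for an assumed 1,000-residue chain:
    puts "bits for 1,000 residues: #{(1_000 * bits_per_residue).round}"
    # Note: this counts one exact sequence only; how many *other* sequences would
    # also be functional is precisely the contested question in the thread.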
grifter: a person who swindles. stacking the deck: "Gamblers 'stack the deck' in their favor by arranging the cards so that they will win." Mark: A person who is the intended victim of a swindler; a dupe. junkdnaforlife
Elizabeth Liddle:
What I set out to do was something much simpler – to demonstrate that Information (by any definition, pretty well, certainly any definition used in ID claims) could be generated by Chance and Necessity.
Webster's first definition: "1: the communication or reception of knowledge or intelligence." Information requires prior knowledge. Program that.
In order to understand information, we must define it; but in order to define it, we must first understand it. Where to start? - Hans Christian von Baeyer
Mung
Hello Dr Liddle, I see your post at 221. I will respond this evening. Upright BiPed
Hi Elizabeth,
"I think that is now virtually, and you concede that generating it in the manner I propose would not only be possible but trivial. I’m not so sure actually! Having set myself the burden not merely of doing it via a Darwinian self-replicator, but of demanding of myself that I start without any self-replicators at all, and first let them emerge."
Please don't misunderstand. I'm not trying to give the impression that your undertaking is (or would be) trivial. That word only applies to the generation of specified complexity sufficient to be measured against CSI. I have a great deal of respect for software engineering of all sorts, even that which I categorize as entertainment. (But I reserve a good deal of skepticism that computer simulations of the sort that demonstrate GAs are doing anything particularly remarkable, as software engineering goes.) The take home point about the "trivial" moniker is that we generate specified complexity regularly as a part of our existence. This paragraph contains specified complexity, and would measure positive for CSI. So it should surprise nobody that a computer program could be made to generate it also. Generating SC is trivial for us, as intelligent agents. I'll respond to other points and questions as I find time, hopefully later today. Thanks, m.i. material.infantacy
MI: taking these in somewhat chaotic order, apologies:
So I’m proposing that the target of your simulation should be analogous to the informational core of a living cell. It needs virtual DNA which codes for virtual proteins, and that at least one protein in particular needs to be present both in form and specification at the same time: the RNA polymerase analog. Now I think you can do away with tRNA as it’s just an adapter (as remarkable as its presence is) but it will need to be replaced with a virtual RNA polymerase. This can be a “black box” but it must be specified in the DNA, and it must also be present in the earliest form of the target proto-organism.
Well, I figured it would be technically simpler to use a tRNA analog. And I was not proposing to emulate anything as complex as RNAP. In other words, my cells would probably just translate anything translatable in the genome string. Nonetheless I am interested in your point:
I’m willing to bet you’re not going to accept this as valid, so I’ll try not to belabor the details too much. See this diagram (I hope the link works): A potential analog to the information core of a living cell. Ra and Rc represent RNA polymerase, with sequence length n, in both the abstract and the concrete forms. Rc is a black box which takes a strand of DNA as input and outputs a sequenced protein. In the proto-organism, it needs to be already present in order for the system to function (I might argue that so does DNA polymerase if we’re going to establish that the organism can validly reproduce). Rc must also be encoded into the DNA strand, and it needs to be the first thing that is, because without it, the clock can’t tick, so-to-speak. It functions as a file header of sorts, a protocol, with which to bootstrap the rest of the organism. If someone were to come across a strand of this DNA, they should be able to bootstrap it by decoding the header (the valid sequence for Rc) and assembling the enzyme, which could then catalyze the production of the other proteins coded for in the strand. I believe that at a minimum, something like this would need to be “evolved” in order to demonstrate that specified complexity can be generated via blind processes. There would still be questions that needed to be answered, such as: what should the sequence length n be for Rc (and Ra); and how is function determined. This is significant because, I suggest, that if you define a function, e.g., “permutation x of a sequence of length n is defined as the function for Rc,” then you’ve smuggled in specification; and if you determine function based on the actual sequence for RNA polymerase, then you’re left with the same impossible search for a function that I suggest is present in the OOL problem. If you define your own search for a function, the sequence length would still need to be long enough to be validly measured against CSI, and so you would still have a search problem — one that dwarfs the number of atoms in our universe multiplied by every Planck time quantum state that’s ever occurred in its history. Thanks much for your time, Elizabeth, The bloke, m.i.
I'm not quite sure about what you are asking re circularity, here, MI, although I agree that there are chicken-and-egg problems inherent to any OOL theory. So I'll explain what I was anticipating would emerge in my demonstration: I'd find myself with "lipid" vesicles, as per Szostak, containing self-replicating polymers. You don't need any fancy enzymes to produce a self-replicating sequence; you just need a chemistry in which single strand polymers tend to form, with spare binding spots on each monomer that will attract its opposite number, resulting in double chains. If something (e.g. a temperature change) results in a splitting of the double chain, you have two singles with complementary sequences. Let these loose in a sea of monomers and they will become matching double chains again. Combine this, as in the Szostak video, with dividing vesicles and you have a protocell. However, right now the sequence of the polymers is irrelevant to the survival and division of the whole. But because you now have the beginnings of a Darwinian-capable self-replicator, any sequence that does prove relevant (as Szostak suggests, if longer chains improve the chances of replication, then protocells with chains consisting of more common polymers will tend to replicate better) will be selected, i.e. will tend to replicate more often, and become more numerous. So far no Information by UPD's definition, but approaching it by Meyer's Webster definition. Now let's say that some polymer sequences tend to eat their own tails as it were - have palindromic sequences that interfere with replication. Those will tend to be selected out. And let's say that some sequences tend to attract additional side-chains that break loose forming rings or twists (because some sequences will tend to do this) and that these objects do something that promotes longevity or successful division. I'm speculating here, precisely because I don't know. One of the things I hope my project will do is tell me what sequences promote longevity and division, because I'm not going to design them in! All I'm going to do is muck around with the starting chemistry so that there is a rich set of possibilities for my emerging critters to "explore". And one set I hope they will explore is that some polymer sequences will result in objects that themselves have binding properties that enable them to "read" other parts of the sequence by binding to certain combinations (like tRNA does to RNA) and offering a binding site at the other end for some other object to form and then break loose and do something useful. At that point, I will shout "Wahoo!!!" and tell UBP that we now have a polymer sequence on the polymer that "represents" a useful object via a protocol object that is itself a direct result of the polymer sequence. That's the part I'm not sure I can do, although I can see in principle that it should be possible. Now, your claim, I think, is that I'm necessarily going to be "smuggling" in something via a fitness function. Well, for a start, I dispute the idea that a fitness function in a GA "smuggles" in design in a way that is relevant to the design of living things - as I keep saying, the fitness function is the analog of the environment in which the population has to survive, not the analog of the Darwinian process itself. But here there is no fitness function provided by me at all - the only fitness function is that intrinsic to any population of self-replicators, namely anything that, in that environment, promotes self-replication.
Which may be greater permeability of the vesicle or less; it may be greater length of polymer or less; it may be resistance to division or greater potential for division. The point is that as I don't know, at any given point (because the environment itself will be constantly changing, not least by the products of the critters themselves), what will best promote longevity and/or division, I can't have "smuggled it in"! If you disagree, can you explain why? I should say that I am blatantly "designing" the initial physics-and-chemistry. So if that is people's only objection, fine - but then we are back to a fine-tuning argument, not an anti-Darwinian one, nor even an OOL one. Which is physics, and Not My Field :) Elizabeth Liddle
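A toy caricature in Ruby of the scheme described in the two comments above (a sketch only, not Dr Liddle's actual project; the complement table, error rate, carrying capacity, and monomer abundances are all assumed values). Single strands template complementary copies with occasional mis-pairing, and the only "fitness function" is differential persistence, here biased toward strands whose complements draw on the more plentiful monomers, loosely following the "more common polymers" point she attributes to Szostak:

    COMPLEMENT = { "a" => "t", "t" => "a", "c" => "g", "g" => "c" }.freeze
    SUPPLY     = { "a" => 0.4, "t" => 0.4, "c" => 0.1, "g" => 0.1 }.freeze  # assumed abundances
    COPY_ERROR = 0.01   # assumed per-monomer mis-pairing rate
    POP_LIMIT  = 200    # assumed carrying capacity of the soup

    # Draw a free monomer from the soup in proportion to its abundance.
    def random_monomer
      r = rand
      SUPPLY.each { |m, p| return m if (r -= p) <= 0 }
      SUPPLY.keys.last
    end

    # A single strand acts as a template for its complement, with occasional errors.
    def template(strand)
      strand.chars.map { |m| rand < COPY_ERROR ? random_monomer : COMPLEMENT[m] }.join
    end

    # How readily a strand re-forms a double chain: the product of the abundances
    # of the monomers its complement requires.
    def persistence(strand)
      strand.chars.reduce(1.0) { |acc, m| acc * SUPPLY[COMPLEMENT[m]] }
    end

    pool = Array.new(50) { Array.new(12) { random_monomer }.join }  # no replicator designed in

    200.times do
      pool += pool.map { |s| template(s) }                         # every strand templates its complement
      pool = pool.max_by(POP_LIMIT) { |s| persistence(s) * rand }  # differential persistence only
    end

    puts "most persistent strand after 200 cycles: #{pool.max_by { |s| persistence(s) }}"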
MI (again)! I have now had time to look at your long posts in detail. You raise a number of interesting points, but I need to clarify something: I did not set out to model the Origin of Life as We Know It. It may well be that modeling the origin of RNA polymerase is a fierce OOL problem and I certainly do not have the skill set to do it! Even supposing that that in particular proves to be the biggest problem in OOL research. What I set out to do was something much simpler - to demonstrate that Information (by any definition, pretty well, certainly any definition used in ID claims) could be generated by Chance and Necessity. Hence the focus on defining Information. I think that is now virtually, and you concede that generating it in the manner I propose would not only be possible but trivial. I'm not so sure actually! Having set myself the burden not merely of doing it via a Darwinian self-replicator, but of demanding of myself that I start without any self-replicators at all, and first let them emerge. However, I agree, in principle, that it would be easy enough to do. But I am concerned about your point (especially given the reaction to my posting of the clock video) that were I to do so, by the method I (transparently) propose, I would have "smuggled in" design. This seems to me to get to the heart of the issue that I was getting at when I made my original claim. My position is that Darwinian process can not only generate Information, and, indeed design things i.e. produce things that clearly serve some sort of function to something (preserve an organism in existence, for instance, or cause a population to persist in a changing environment) but does so in a manner that is directly analogous to the way our own minds work. So it doesn't surprise me that the products are similar! And only differ in ways in which the two things differ. So I'd really like to get to the heart of this issue. However, if other people don't find that a stumbling block - disagree with you, in other words, that my proposed simulation (which would, in effect, be a program that does what that Szostak animation does), if successful, would not have supported my claim, then I am happy to proceed. Elizabeth Liddle
But I’ll say for my own sake that, from what I can tell, I can only conclude that Elizabeth’s simulation will certainly generate CSI (no great feat, unfortunately; sorry EL) but it won’t do it blindly. It will need to assume the specification that it purports to demonstrate. This will be the result of avoiding “search” (the search for a function is a significant part of the problem here, AFAICT).
Interesting. I wonder who agrees. If people do, I'm glad we got here before I embark on my project! Because if your view is widely shared, MI, the definitional problem is not with a definition of information (which we seem to have got, essentially) but with the definition of evolutionary search. So I guess we had better go back to there. Thank you for elucidating this point MI. Elizabeth Liddle
This post has been resubmitted with two less links in an attempt to thwart the mod filter. _______ I mentioned several days ago that I hoped to post a summary of my thoughts regarding Elizabeth’s proposal. And I’ve been struggling with it, to express myself better than I already had. I’ve kept quite a few notes; and I still may post a summary of them, but I think my part in this essentially ended at #127. So I decided instead to just post a recap of links to my (self-proclaimed) substantive comments, along with some links to others in context, in case anyone might find them interesting. Here it goes. I began my intrusion on this thread at #44, and added some substantive comments to that post in #51. Elizabeth kindly responded to #44 in #81 with several comments, amidst her involved, even heated conversations with Mung and Upright BiPed. Fearing my point had not been made, I followed up with a three part response beginning at #114 and ending at #116. Elizabeth responded briefly at #126, and I responded again briefly at #127 posing a question. At #149 I posed an addendum prompted by a comment that Mung made at #144. At #153 I posted something which could only be identified as a kind of sarcastic disillusionment (if nothing else, xp). At #154 I proposed a working definition of Information, in the language of ID, which I felt might be a suitable starting point for the context I had introduced at first (#44), although nothing like the one UB had labored over. So that pretty much ends my involvement with this thread, as it doesn’t make a whole lot of sense to try this from two separate angles. I think UB is more than up to the task of working with Elizabeth on a suitable definition of information, and certainly more capable and willing than I. But I’ll say for my own sake that, from what I can tell, I can only conclude that Elizabeth’s simulation will certainly generate CSI (no great feat, unfortunately; sorry EL) but it won’t do it blindly. It will need to assume the specification that it purports to demonstrate. This will be the result of avoiding “search” (the search for a function is a significant part of the problem here, AFAICT). With that said, I wish Elizabeth the best of luck, as I do UB, who will be, it appears, laboring to communicate a picture of the real problems, by way of an operational definition of information. A good day to all, and a good week upcoming. m.i. material.infantacy
Thanks Dr Liddle, my daughter is fine. It was just a scare (a big one). She is fine, just like her mother – healthy, accomplished, and beautiful. And today is her birthday. :)
And a happy birthday from me too! So glad to hear this. UBP - I've now read most of this thread, and I did like this from another thread:
This suggests that an immaterial representation (of the state of an object) is given material status – embedded in matter or energy – and that material representation IS the information. In other words, the state of an object is represented in a separate state of matter. A protocol can then be used to access the knowledge (of the state of the object) embedded within the representation. The distinction here is that there are two separate realities that are properly accounted for. There is the state of the object, but there is also the representation (instantiated in matter). They are not the same thing. I maintain that the state of the object is nothing more than the state of the object. The representation of that object is the information.
I do think we are nearly there. I think the final stumbling block can be overcome quite simply, as MI suggested, by giving an actual example of what would constitute a representation in my simulation. And I suggest this: that if, in my simulation, functional objects are created within my organisms (functional, in the sense of helping the virtual organisms to replicate successfully) by means of a process that has a sequence of some kind as the input and the object as the output, then that would constitute the generation of information. Now, I don't claim any originality in what I am trying to do, except that I'd like it to actually work as a simulation rather than as a description, but the inspiration is something like this: http://www.youtube.com/watch?v=U6QYDdgP9eg (jump to 3'50") As you will see, the idea is that two kinds of things form - lipid vesicles and self-replicating polymers. At some point, I'd say, information, by your definition, arises (it's only an animation of course, not a simulation). So if you have the time to watch it, that might either get me to the starting block, or, alternatively, send us both back to the drawing board! But my question now at least is simple: if I could produce something that simulated Szostak's proposal here, would it, in your view, count as the generation of information from Chance and Necessity? Elizabeth Liddle
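For what it is worth, the bare structure the two sides are arguing over can be written down in a few lines of Ruby (the triplet-to-part mapping below is invented purely for illustration, not anything from the thread): a "representation" string is read through a discrete "protocol" object that maps input triplets to output parts; the mapping is arbitrary rather than physically determined, and the protocol mediates without being consumed or altered. Whether all three pieces can emerge unguided in a simulation is the open question, not whether the structure can be coded.

    # An assumed, purely illustrative adapter table (the tRNA-like "protocol").
    PROTOCOL = {
      "aga" => :hinge,
      "gcc" => :clamp,
      "tat" => :linker,
      "ctt" => :pore
    }.freeze

    # Read the representation in triplets and emit the parts of the output object.
    # The table mediates the mapping but is neither consumed nor altered by it.
    def translate(representation, protocol)
      representation.scan(/.{3}/).map { |triplet| protocol[triplet] }.compact
    end

    representation = "agagcctatctt"                    # an arrangement of "virtual matter"
    puts translate(representation, PROTOCOL).inspect   # => [:hinge, :clamp, :linker, :pore]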
By what physical means is an E assigned its role as a symbol? So what is a materialist to do? Deny that 'E' is a symbol? Assert that physics and chemistry can produce symbolic relationships? And that's just stage 1. How do we then get from there to codes, coding and encoding? Mung
UB, can of worms I think, but I essentially agree that if we consider a living system to be an ARTIFACT of a mind, there's a protocol evident in the mapping between DNA and whatever protein it's translating into mRNA, embodied in RNA polymerase. There are others, certainly, but that's my pet example until I find a better one. xp Warning: pontification to follow. In this embedded specification there's a circularity (paradox) present: that RNA polymerase gives rise to itself during the replication process. It must read the code for itself from the DNA and sequence the mRNA strand for itself to be later translated into a protein. This embedded specification for an already present system is evidence for the involvement of a designer, unless chance and necessity can be vindicated. material.infantacy
MUNG: "By what physical means is an E assigned it’s role as a symbol?" MI: "by no physical means whatsoever, I believe." I think there is little doubt that there is a neural pattern in our brains that (through learning) establishes the English symbol 'e' with the "eee" sound that we make in speech. Just as there is a neural pattern in our brains that established the word a-p-p-l-e with the red fruit with the white center and the little black seeds that grows on trees. It seems there is ALWAYS a physical protocol that establishes the mapping of symbol to that which is symbolized. It also seems that this protocol is ALWAYS established in the reciever of the information, in order for the information to have an effect. This is what reason dictates, and what the evidence backs up. Dr Liddle's exercise is for that mapping to rise by nothing but physical law. Upright BiPed
Mung at #216, by no physical means whatsoever, I believe. Correct me if I’m wrong. Following one causal chain backwards, the contingent arrangement which forms the letter "E" relies upon a protocol established between the two fax machines which converts a contingent arrangement of digital “pulses” imposed upon a stream of electrons, into a printed document. Then we have a scanning protocol initiated in the first fax machine to convert monochrome contrast on a sheet of paper into the pulses that get sent along the wires. This required a protocol which converted a specification for a fax machine into a fax machine during the manufacturing process. This required an engineer capable of designing and configuring the protocols to build the fax machine, to scan the document, to convert the scan-line data into digital pulses capable of long range transmission, and convert back to a scan-line format on the other end, reproducing with acceptable fidelity, the original document to be read and understood by the receiving party. The causal chain begins with a mind. However our engineer could have been practically illiterate, because nothing that a fax machine does is reliant on the symbol for “E” -- for that, we would need to look at the causal chain that begins with the sending party and ends with the receiving party. The fax machine merely scans a document, irrespective of whatever symbols are present, encodes it into a digital scan-line format, and sends it to the receiving machine. The sending party encodes the information into symbols. The fax machine transmits the form of the symbols to the receiving machine, which prints out a copy of what it received. The contingent arrangement which forms the symbol for “E” instantiates the concept “E” into the mind of the receiving party. Sloppy post, but I’m doing this in haste. m.i. material.infantacy
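A few lines of Ruby illustrate the same point (using ASCII as a stand-in for whatever protocol the sending and receiving parties have agreed on): the bit pattern assigned to "E" is a convention held at both ends, not something derivable from the physics of the ink or the electrons.

    # "E" only means E because sender and receiver share the same table (here, ASCII).
    code = "E".ord        # => 69, the assignment the shared table makes
    puts code.to_s(2)     # => "1000101", the pulses a channel might carry
    puts code.chr         # => "E" again, but only because the receiver uses the same table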
So suppose that instead of being in Los Angeles, our second fax machine is in Liverpool, where perhaps Elizabeth can get a glance at it. What is it, exactly, that determines that the ink will form the shape of an E, of all the possible shapes? Can we call it information? By what physical means is an E assigned its role as a symbol? Mung
Happy birthday, Ms Biped! kairosfocus
Thanks Dr Liddle, my daughter is fine. It was just a scare (a big one). She is fine, just like her mother - healthy, accomplished, and beautiful. And today is her birthday. :) Upright BiPed
I'll try to get back before the 17th. Right now life is intervening (in a good way, mostly). Hope your daughter is doing better, UBP. Cheers Lizzie Elizabeth Liddle
Moderators: please don’t lock this thread!
Dr Liddle, I think this thread will close on the 17th of August, but I could be wrong. Regarding our conversation, I am glad you are taking some time. A break is reasonable, particularly if you’re already busy. It is a daunting task after all. Almost as if you are standing on the shore of an ocean, but can see the other side. You need something physical to happen over here, and have it determine something physical happening over there. And you need this coordination between the two to rise from physical law alone. Or you can turn the thing around, and look in the other direction. You can see something happening back on the first shore, and you know it's supposed to determine something happening over here on the second - but you don’t know what that something is, and you don’t even know what it was on the first shore that was supposed to tell you. There is of course a solution to this problem; it’s the same solution that exists in any other form of information transfer. The solution is a physical thing that coordinates the two sides together – a protocol – a thing to map what is happening on the second shore, to what is happening on the first. So to speak… :) Upright BiPed
... like the Ouroboros-worm. Ilion
Yes Ilion, it's an awful state -- terribly self-contradictory. material.infantacy
" They both love and despise themselves — tormented, pitiful creatures — consumed by hatred for those they disagree with, the very thing which sustains them." They consume and are consumed, simultaneously? Ilion
Sorry Mung, I have to call it like I see it. xp material.infantacy
Now that hurt! Take it back. ;) Mung
By some foul craft, pedant trolls have been crossed with stalker trolls. They can move through a blog with great speed and no shame, and they can feed off the attention they receive from those they loathe. They both love and despise themselves -- tormented, pitiful creatures -- consumed by hatred for those they disagree with, the very thing which sustains them. material.infantacy
He posted a link to Shannon's paper. http://ens.dsi.unimi.it/classici/Shannon_1948.pdf HERE It was kind of funny. He told me I should read it. He apparently didn't notice that I'd posted links to the same paper and quoted from it earlier in that same thread. So he got off to a real good start in that thread as well. Mung
Was it something I said?? Upright BiPed
....continuing with William It just seems to me that it would be intellectually troubling to stand by the claim that 'ID hasn't made a valid case' if in fact you were having to design a simulation in order to prove their case was invalid. Upright BiPed
Hey William, since you are hanging out with us, can I ask a question: Since Dr Liddle is attempting to design a simulation that may at some point in the future (if successful) falsify the semiotic argument for ID, would you agree with her that ID hasn't made a valid case? Upright BiPed
So the answer is "no" The paper you have has no credible idea (or preferably experimental results) as to how symbolic representations and physical protocols came to be coordinated together in a system of information storage and processing? Thanks for chiming in. Upright BiPed
It is always information about something. Its effect is to change, in one way or another, the total of ‘all that is the case’ for us. This rather obvious statement is the key to the definition of information. – Donald M. MacKay, Information, Mechanism and Meaning material.infantacy
I think William stumbled on this thread while stalking Mung. I think he posted here by mistake, because he asks Mung what new insights he provided, yet there's nothing here by William but two snarky non sequiturs, while I counted 30 posts by Mung before getting tired of counting. I have to think he meant to post on another thread, because I have a hard time imagining the hubris required to level a charge of "contributing nothing" to a thread that one's contributed absolutely nothing to. material.infantacy
Upright,
how symbolic representations and physical protocols came to be coordinated together in a system of information storage and processing?
No, the paper I am referring to speaks to information only. For example, Mung notes:
It is always information about something. Its effect is to change, in one way or another, the total of ‘all that is the case’ for us. This rather obvious statement is the key to the definition of information.
Casting information into a form that can be subject to analysis from a mathematical point of view has a long history. I'm simply pointing out to Mung that he would be better served by learning what is already known regarding information than by attempting to get up to speed by asking to be spoon-fed a course in information theory on a blog. However I have my own point of view regarding "symbolic representations and physical protocols", but many of the questions/points I have will no doubt be answered when your particular definition is operationalised so it can be examined in the simulation that Lizzie is proposing. So I'm going to hold off until then, when many things will become clear that are currently unclear. WilliamRoache
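For concreteness, here is a minimal sketch of the sort of mathematical treatment being pointed to: the quantity defined in Shannon's 1948 paper, estimated from observed symbol frequencies. Note what it does and does not capture; it measures statistical surprise only, and says nothing about what, if anything, the symbols stand for, which is the very point in dispute on this thread.

    # Shannon entropy of a symbol source, estimated from observed frequencies.
    from collections import Counter
    from math import log2

    def entropy_bits_per_symbol(text):
        counts = Counter(text)
        n = len(text)
        return sum((c / n) * log2(n / c) for c in counts.values())

    print(round(entropy_bits_per_symbol("AAAAAAAA"), 3))   # 0.0 -- no uncertainty at all
    print(round(entropy_bits_per_symbol("ACGTACGT"), 3))   # 2.0 -- four equiprobable symbols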
Was it something I said? Upright BiPed
William, I am unfamiliar with the paper you refer to. Does your paper from the 60's give any credible idea (or preferably experimental results) as to how symbolic representations and physical protocols came to be coordinated together in a system of information storage and processing? Upright BiPed
Mung,
Such insightful originality is always welcome around here. Unfortunately we can’t pay you for it.
Well, let's look at the facts. What new insight have you provided that has not already been detailed hundreds of times here and elsewhere? Do you honestly think that nobody has thought of your questions/points, that they are somehow original and insightful? If you'd read the paper I already linked to, which has been available since the 60's, then many of your kindergarten-level questions would be answered. But you want to be spoon-fed constantly. The funny thing is that it's obvious that were you to attempt to engage professional scientists in any other venue than this, you'd be met with disbelief that somebody with so little understanding is attempting to critique what they don't actually understand. And then be ignored. WilliamRoache
Dr Liddle: Surely you know better than that. The problem of general skepticism is that it is self-refuting by self-referential incoherence. It is not possible that "all viewpoints may turn out not to be" valid, because there are some things that are certainly true. One of those certainly true points is that error exists, which cannot be denied without supplying an instance of the very thing denied. Similarly, that knowledge is possible is undeniably true. General skepticism is inherently absurd. Yes, since error is possible, and indeed exists, we must be humble and open-minded in our pursuit of the possibility of knowledge, here in the form of warranted, credibly true belief. But given the very fact that we can, to certainty -- not just moral certainty, but on pain of reduction to absurdity -- warrant at least one knowledge claim, we know that knowledge, even to certainty, is possible and actual. Beyond that we have weaker forms of knowledge that we routinely use in going about life and science, provisional warrant, sometimes to moral certainty [beyond reasonable doubt], sometimes to the preponderant balance of the evidence. Yet further, we have opinions which may be sufficiently well supported that it would be irresponsible -- imprudent -- not to act on them, but they are less certain yet. So, please revise your project in light of first principles of right reason and the impact of self-evident truths that are the pivots for that right reason. GEM of TKI kairosfocus
An intervention: Pardon, but part of the exchange pivots on the significance of symbols as digital carriers of information that may be expressed in signals and messages. Symbols have observable, measurable characteristics that are so closely associated with the information they convey that the statistical study of symbols is tantamount to a study of the associated information. This is quite similar to how energy is a very abstract entity, but the study of closely associated entities and operations such as work, wave fluxes, fields, forces, motion, etc. in suitable configurations is effectively a study of energy. Now, in terms of common use, digital information functions in computers, in text, in telephones, etc. In some cases a living mind is directly involved, as when we read or hear or speak. In others, the symbols, in the form of messages conveyed by signals of one form or another, are processed in that form based on intelligently designed protocols and procedures, in machines that are intelligently designed and configured, to achieve what we commonly call information processing. But in the latter case, we should observe carefully: it is the physical signals and their associated modulations of contingent material or energy or wave states that are being mechanically processed, much as a numerically controlled machine will mill a bit of wood to make a carving, as opposed to a skilled human carver doing much the same. That we see the machine at work does not imply that there is no intelligence involved, just that it has been canned and embedded in machinery. So, if we were to put the two carvings side by side, we would be entirely warranted to infer to design as the root cause in both cases. In the case of the living cell the situation is much like the NC machine. The key difference -- often highlighted by evolutionary materialism advocates and fellow travellers -- is that the living cell self-replicates. But that is simply an instance of ADDITIONALITY. The living cell manifests the factory side and the self-replication side, which partly uses the factory to draw on stored information that is digitally coded and to create a copy of itself, perhaps an imperfect one. Von Neumann showed us how to do that in principle, through his kinematic self-replicator, but the past 60 years have shown that this is much easier said than done, though we now have some very primitive partial instances. (Cf. RepRap, a 3-D printer, here, which makes about half its own parts, but of course does not assemble them into a copy of itself, which the cell does in the course of a few minutes.) What I always find astounding is that people can look at a functioning kinematic self-replicator that is also a nanomachine-based factory, and then say that it can credibly originate by chance and necessity, and will then, by lucky accidents rewarded by success in ecological niches, build itself into a world of life. It strikes me that such have never done serious software and hardware system development, much less the sort of machine they are talking about. So, they have no real idea of the immensity of the specified complexity challenge they are so easily gliding over with a few smooth words. Even the first 72 ASCII characters of this post are hopelessly beyond the reasonable chance-and-necessity blind search capacity of the 10^57 atoms of our solar system, our effective world.
[If you had a haystack 1/10 of a light year on a side and you were to make a single sample equal in size to one straw from it, would you reasonably expect to find isolated needles in the stack, or would you overwhelmingly expect to find only a straw? Of course, if you were to have an intelligently designed machine that could go needle-hunting, say by putting out mm-wave radar pulses and picking up echoes, then converging on likely targets, that would be a very different thing. But that would be the use of a warmer/colder oracle, not a chance-based random walk rewarded only by actual success, in a context where a mm's miss is as good as a mile.] GEM of TKI kairosfocus
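The back-of-envelope arithmetic behind that last claim can be made explicit. The atom count is the one cited in the comment above; the event rate and the time window below are generous illustrative assumptions, not figures anyone here has committed to.

    # Rough arithmetic: the space of 72-character ASCII strings versus a generous
    # estimate of available trials. The rate and duration are assumed figures.
    from math import log10

    configs = 128 ** 72                 # 72 characters from a 128-symbol alphabet

    atoms = 10 ** 57                    # solar-system atom count cited above
    seconds = 10 ** 17                  # order of magnitude of cosmic time, in seconds
    events_per_second = 10 ** 14        # assumed per-atom "search" rate (fast)
    trials = atoms * seconds * events_per_second

    print(f"configurations: about 10^{log10(configs):.0f}")    # about 10^152
    print(f"possible trials: about 10^{log10(trials):.0f}")    # about 10^88
    print(f"fraction searchable: about 10^{log10(trials) - log10(configs):.0f}")   # about 10^-64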
i.e. all skepticism is valid: all viewpoints may turn out not to be :) That's the point. Elizabeth Liddle
Is it a zone where skepticism about materialism is also valid
Yes! And climate change, even. Or who won the 2004 Presidential election. The only rule is: Park your priors at the door! In other words, loosen your assumptions, whoever you are, and prepare to have them challenged. But I'm going to be anal about insisting that arguments are aimed at arguments not at people. The working assumption that people are posting in good faith must be adhered to. Hence the guano page where violating comments will remain visible but uncommentable on, and out of the way. If it doesn't work, I will close the blog (or keep it for my personal musings :) There are plenty of other forums around where people can cackle at the stupids who don't share their views. Elizabeth Liddle
Elizabeth, appreciated. Your courtesy is a credit to your character; and your willingness to entertain at least some of our viewpoints and ideas is admirable. I don't have the same view regarding many of your associates, however. I have no interest in defending ID against charges of being "creationism in a cheap tuxedo," or of ID being "unscientific." I've no interest in engaging with people who are incapable of respecting those of my persuasion. I find repeated assertions that mind is an emergent property of matter a bore, as I find those who are incapable of even assuming for the sake of argument that the mind is, even possibly, in a category of its own. And I have neither tolerance nor respect for priggish behavior, classism, elitism, or for bullying, stereotyping, caricaturing, and all the tactics I've witnessed here numerous times by the same folk who whine incessantly about UD's moderation policies. I find all of that very tiring and I have no intention of investing any effort on those who will steadfastly refuse, to the bitter end, to consider the validity of making a design inference because they can't, even for a moment, accept the implications that might follow. I could go on but I think you get the point. I'm finding it rewarding to explore ideas here with people with whom I have something in common philosophically, having spent enough of my youth having philosophical materialism rammed down my throat to the point of gagging. There is no philosophical parity between materialism and theism, so there is little common ground to be had there, and so no point in trying to convince of anything those who've stopped up their ears to anything I have to say before I've ever said it. These are only interested in scoring rhetorical victories before high-fiving their buddies and proclaiming "pwnd!!!11!!" I'm not in a war of world views on this blog. I can converse with those who consider what I say, and who extend the benefit of the doubt when I say something stupid or get in over my head. I can for the most part expect respectful correction, and sincere attempts to help me expand my understanding by sharing insights and trying to enlighten me on facts or viewpoints that I may not have considered. Can I get the same at "The Skeptical Zone?" Who knows, but I'd bet not. Is it a zone where skepticism about materialism is also valid, in any sense, instead of being written off as religious fanaticism, or "ignorance of the facts?" Because what passes for intellectual pursuits among committed materialists seems like little more than elitism, dismissive of even the notion of a divine intelligence to the point of being downright hostile to, and demeaning toward, anyone who disagrees. Well, that's my evening rant, I hope it was at least entertaining! xp g'night, m.i. material.infantacy
This thread is getting really interesting, and I'm finding the last few posts very useful. Moderators: please don't lock this thread! However, if we get stuck again, I've bookmarked this one, and I strongly suggest that we adjourn to The Skeptical Zone and set up permanent camp! I've started a thread already, but can make a new one with a link to here. Or if anyone wants to write their own OP, just let me know. I'm making all users "contributors" with OP-writing permissions. Also I have a guano zone, MI, and I'm not afraid to use it :) Elizabeth Liddle
I guess I'm looking for the appropriate nomenclature relative to two distinct entities: the concept conceived in the mind, and the symbols representative of the concept. material.infantacy
fG, That's true in a sense, except we're understanding a delineation between that process and its representation in the mind. I gather it's your view that there is no delineation, and the conscious process is a moment-by-moment change in "brain state," which corresponds directly to electro-chemical states in the brain. If you're right about that, then you're absolutely right in your #185. But assuming for a moment that the process in the brain is the result of mind, then we have another category of thing which requires some sort of explanation. In this case, the concept formulated in the mind is of an entirely different category than the physical states of the mediums which carry the symbols. If I'm starting to understand Mung after my inanity in post #183, the concept suspended in the mind is information, and the symbols strung together in whatever medium, which carry the message, contain no information. If that's the case, then the message is an artifact of the mind, having no impact on anything in the universe whatsoever, having no quality distinct in effect from any other contingent arrangement of the same. Information is instantiated in the mind, and the impact of the message on another mind is another "instantiation" of the message, either creating, or simply conveying, the information. This is where Mung comes in and says, "No, you're not getting what I'm saying at all." If I am, information can only exist and be contained in a mind, and any physical state used to represent the information is a string of symbols producing no effect except in another mind. If that is the case there's another type of artifact to categorize, but I'll save that for a bit. material.infantacy
“When a personal assistant in New York types a dictation and then prints and sends the result via fax to Los Angeles, some thing will arrive in L.A. But that thing — the paper coming out of the fax machine — did not originate in New York. Only the information on the paper came from New York. No single physical substance — not the air that carried the boss’s words to the dictaphone, or the recording tape in the tiny machine, or the paper that entered the fax in New York, or the ink on the paper coming out of the fax in Los Angeles — traveled all the way from sender to receiver. Yet something did.” ------- This question goes away the moment one realises that information is not a thing, but a process, a chain of interactions. Nothing travelled all the way from New York to Los Angeles. What happened in Los Angeles was just the final step in a series of interactions. fG faded_Glory
Correction. Old: ...can we say that it received a message representing the sequence of symbols? New: ...can we say that it received a message represented by the sequence of symbols? material.infantacy
Mung, regarding #180, "So one thing that UPB, yourself, and Elizabeth all seem to hold in common is that a receiver does not have to be a mind." I need to give this some thought. I typed a bunch of stuff but it was all rubbish, so I flushed it. What follows may not be much of an improvement. As a side note, I would say of Meyer that he may have made a conscious decision to trade precision for accessibility, avoiding carefully crafted definitions for commonly used words, for the sake of a more general audience. ... "But is that actually the case? Because he also seems to acknowledge that it is the fax machine (or perhaps the paper itself) that was the receiver of information." I'll confess to being ignorant of the distinction, but I'm intrigued and would certainly like to know more about how you see this. Can you recommend some reading material? I'm happy to do my homework on this. Does it change things if we say that the fax contained a "message" instead of containing information? Because I would suggest that they are just splotches of ink on a paper, and the particular configuration isn't significant to any entity save a mind, to which it is potentially profound. (But for what we observe in a cell, there's certainly an artifact of mind crafted in the languages of chemistry and physics. Is it information?) So if you're saying that information can only be instantiated and received by a mind (and I agree in principle, so this is about a definition, not a concept) and that's part of the definition of information, then how do we refer to the coded symbols, the contingent arrangements, which carry it? If the fax doesn't receive the information, can we say that it received a message representing the sequence of symbols? That makes sense to me I suppose, but I'm afraid I might be missing the point. I believe it's reasonable to presume that there is an artifact we observe which can only be the product of mind. About that we certainly agree. What I'm asking is if I'm abusing the language by using the word information where it would be more appropriate to use a word like message. And I agree that it's important for us to be consistent about our definitions and usage, as much as possible. So I hope you don't read all this rambling as quibbling. I want to understand these distinctions better, and the concepts relating to them. "But they do need to be symbols, right? They need to represent something." Absolutely. So I'll ask a couple of questions to seek clarification, and make a few assumptions. I come across a message written on a piece of paper, which reads, "Help! I need a roast beef sandwich and a six pack of Sierra Nevada Pale Ale delivered immediately to 1234 22nd street #212 or I will surely die!" The message has clearly been written by someone and now read and understood by me. Where's the information now? I write a program which takes a gray scale bitmap containing elevation data and outputs a 3D rendered terrain grid on the computer screen. I send it to a client who runs the program and examines the output. Where's the information in this example? Is it in the program (bits in RAM or hard drive), the source data, the output, or just in the minds of the sending and receiving party? Again, recommended reading would be appreciated, especially if you want to escape my inane questions. xp m.i. P.S. I'll read #182 now. material.infantacy
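A toy version of the elevation-map example, with the grid and the scale factor invented for illustration: the program is a purely mechanical rescaling of grayscale values into heights, which is exactly what makes the question of where the "information" resides in this chain an interesting one.

    # Mechanical part of the elevation-map example: grayscale values (0-255)
    # are rescaled to heights. The grid and the 1000 m scale are made up.
    GRAYSCALE = [
        [0, 64, 128],
        [64, 128, 255],
    ]
    MAX_HEIGHT_M = 1000.0   # assumed scale: pure white = 1000 metres

    heights = [[round(px / 255 * MAX_HEIGHT_M, 1) for px in row] for row in GRAYSCALE]
    for row in heights:
        print(row)
    # [0.0, 251.0, 502.0]
    # [251.0, 502.0, 1000.0]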
In everyday language we say we have received information, when we know something now that we did not know before. If we are exceptionally honest, or a philosopher, we assert only that we now believe something to be the case which we did not previously believe to be the case. Information makes a difference to what we believe to be the case. It is always information about something. Its effect is to change, in one way or another, the total of ‘all that is the case’ for us. This rather obvious statement is the key to the definition of information. For those to whom 'metaphysics' is a bad word, any aura of metaphysical abstruseness which it may have is easily exorcised. What we know or believe, in science at least, could in principle be represented in a variety of quite precise ways: we might make a long statement, or draw a symbolic picture, or make a physical model, or send a communication-signal. All the results could in a sense show or embody what we believe: they are what we may call representations: structures which have at least some abstract features in common with something else that they purport to represent. These abstract features of representations are what we want to isolate. They form the real currency of scientific intercourse, which is normally obscured in wrappings of adventitious detail. Now that we have established this fundamental notion of a representation, information can be described as what we depend on for making statements or other representations. More precisely, we may define information in general as that which justifies representational activity. – Donald M. MacKay, Information, Mechanism and Meaning Mung
WilliamRoache:
No, not really.
Such insightful originality is always welcome around here. Unfortunately we can't pay you for it. Mung
m.i., I understand what you are saying, and I think Meyer is imprecise in his language. It is just this sort of imprecision which leads to so much confusion, even among ID advocates, and information is a bad place for confusion to exist! There is no information on the paper. What is on the paper is ink/toner. :) What is it that makes those splotches of ink anything other than splotches of ink on paper? The process that puts the ink on the paper is mechanical. The ink is given form. Mechanically. Discrete symbols appear. The symbols "stand for" something, they "represent" something.
A symbol is something which represents an idea, a physical entity or a process but is distinct from it. The purpose of a symbol is to communicate meaning. http://en.wikipedia.org/wiki/Symbol
The symbols are arranged in a sequence. Again, through a mechanical process, they are imprinted on the page by giving form to ink. The arrangement of symbols itself gives form to a representation: that which represents another. It is these representations that we look for to tell us that information is present (among other things). As an interesting point for possible further discussion, Meyer's view seems to be that the information needs to be on the paper because someone needs to then read the fax in order for there to have been a transfer of information. But is that actually the case? Because he also seems to acknowledge that it is the fax machine (or perhaps the paper itself) that was the receiver of information. So one thing that UPB, yourself, and Elizabeth all seem to hold in common is that a receiver does not have to be a mind. And we can even talk about an effect, in that the effect is the arrangement of symbols formed by applying ink to a sheet of paper in Los Angeles. But they do need to be symbols, right? They need to represent something. Do formless blobs of ink convey information? Maybe they convey that the fax machine is broken, lol. Mung
Mung at #175, Meyer says something that struck me as similar. From page 15 of SITC, Meyer suggests to the reader that information is in another category from the physical: "When a personal assistant in New York types a dictation and then prints and sends the result via fax to Los Angeles, some thing will arrive in L.A. But that thing -- the paper coming out of the fax machine -- did not originate in New York. Only the information on the paper came from New York. No single physical substance -- not the air that carried the boss's words to the dictaphone, or the recording tape in the tiny machine, or the paper that entered the fax in New York, or the ink on the paper coming out of the fax in Los Angeles -- traveled all the way from sender to receiver. Yet something did." material.infantacy
Good show at #163, faded_Glory. "Information" is always in the eye of the beholder, just like "beauty." Pedant
Hi UB, regarding your post at #177: Don't misunderstand, you seem more than capable of the task -- your grasp on the problem is impressive. I just wouldn't think it fair to ask you to shoulder the entire burden of providing (or better yet, representing) a comprehensive definition of information for a simulation that will probably be used to justify who-knows-what claims about the efficacy of chance and necessity to do its own designing. I wouldn't have the stones to do it. And yes, I wondered myself about the eerie silence. I expected to be corrected at some point for my own assertions regarding what would "prove" that chance and necessity can stand in place of a designer. Perhaps we're the only ones who find it particularly interesting. It does seem almost axiomatic that a simulation is incapable of solving the search-for-a-function problem (squared), so maybe they see it as an exercise in hot air, no offense intended to Elizabeth. I hope to summarize my own view on the problems and potential solutions in the next day or so, and then I intend to head back to the bleachers and watch the game, with perhaps the occasional yelling from the sidelines. material.infantacy
MI at 174. Obviously I cannot speak for the ID community. I am just me - trying to articulate an argument coming from the observations. And if you take a look around, you'll notice that (with just a couple of valued exceptions) the UD contributors have remained fairly silent on these threads, so that should answer that. Upright BiPed
Mung,
Just wondering if pursuing this line of thought would be helpful.
No, not really. WilliamRoache
So in the search for information, what are some examples from real life which we can all agree on, if any? Can we take the contents of a file on a hard drive as an example? Let's say that I prepared this post in a text editor and saved it to a file on my hard drive. I then open the file on my hard drive and copy/paste the contents and submit them. Or I open an email application and attach the file to the email and send it to someone. So what is the process that is involved, and how is the information "stored" and communicated? When I press a key on the keyboard, mechanics unquestionably take over, agreed? Something gets stored in RAM, something gets displayed on the screen, and when I save the file something gets stored on the hard drive which can later be retrieved, displayed, and communicated. No one would argue that this process from key to hard disk is not entirely mechanical, would they? So what actually happens during that process, and can it help us understand how to discover the presence of information? Or perhaps introducing a hard disk is just adding an unnecessary extra component. Hard drive encoding is a whole topic in itself, lol. So what about RAM? How do we get from keyboard to RAM and what is it that gets stored in memory? Now first and foremost RAM is a physical substrate. What is it about RAM that allows it to store representations of information? Just wondering if pursuing this line of thought would be helpful. Mung
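A sketch of the purely mechanical chain just described, with the file name invented for illustration. At every stage what physically exists is a pattern of byte values; the "E"-ness of the value 69 exists only by the ASCII convention shared at both ends.

    # Keystroke -> bytes in memory -> bytes on disk -> bytes read back.
    text = "E"                                  # what the typist intends
    in_ram = bytearray(text.encode("ascii"))    # the arrangement actually held in RAM
    print(list(in_ram))                         # [69]

    with open("note.txt", "wb") as f:           # "note.txt" is an illustrative name
        f.write(in_ram)

    with open("note.txt", "rb") as f:
        stored = f.read()

    print(stored.decode("ascii"))               # "E" again, recovered via the same convention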
Ditto to Mung's #172. But I would recommend that at least Meyer and the EIL be made aware of the goal and the terms as part of the arrangement, to be given an opportunity for comment. It shouldn't be hung on UB to speak for the entire ID community, unless he prefers it that way. material.infantacy
On the non-circularity of Information Charges of circularity have been leveled, but they are misguided. Elizabeth believes that Intelligence can arise from Chance + Necessity sans Intelligent causation. So even if it is true that information requires intelligence, nothing prevents the proposed simulation from incorporating the generation of intelligence from Chance + Necessity in order to bring about information. In fact, that would be an even more impressive demonstration. Away with the misguided objections! On with the simulation! If it must first generate intelligence, so be it. Mung
And for the record, if the two of you (Upright BiPed and Elizabeth Liddle) come to some agreement, I have no intention of stepping in and objecting. As far as I am concerned the actual decision as to what is a sufficient demonstration for the purposes of the discussion between you is between the two of you. Mung
UBP: Touched and reassured by 169. Thank you, both for this and for your patience :) Will look in detail at 170 before responding. Cheers Lizzie Elizabeth Liddle
Dr Liddle, it seems like an utter waste of time to have to go through the sideshow, but you’ve made several comments. You continue to cast my argument as “circular”. What on earth are you talking about? From the very start I have said that the presence of information requires a mechanism in order to bring it into being. I’ve never described information as solely the product of a mind, and therefore do not make the circular argument that the product of a mind is defined as the product of a mind. In fact, this is something you yourself acknowledged in a comment to another contributor to the conversation (Liddle: ‘Upright BiPed has already said that a mind is not necessary’). So what gives? One instance where you started calling the argument circular is when I brought Nirenberg into the conversation in full. I told you at the time that simply recognizing the historical fact that Nirenberg confirmed the existence of information in the genome (by demonstrating it) did not constitute a circular argument. Instead of stating why you thought it was, you only wondered why I didn’t see it. You also made the circularity claim when I observed that the physical state within nucleotides never interacts with the physical state of the amino acids, yet one determines the other through a discrete physical object: a protocol. You’ve made these claims along the way, but you have yet to actually demonstrate what is circular. Either do so, or leave it alone. - - - - - - - - - - - - - - - Also, you seem to want to portray yourself in this conversation as having provided a definition that fits the observations, and are just waiting and waiting and waiting for someone to come along and give it the “okay”. This is either lunacy, carelessness, or deception, and it can easily be shown as such. Look at the very last definition you offered, versus the first. Observe the dates. Notice the incremental changes that took place along the way. This, Dr Liddle, has been an exercise in getting you to understand and include (one by one) the critical details required to confirm the true existence of information. Here is an example from late in the conversation (just one week ago as of today): I am arguing that to confirm information it must be demonstrated, and you are agreeing to a demonstration, except that you don’t feel we need to include the language (or the dynamics) of ‘representations’ and ‘protocols’:
BIPED: This is why the demonstration is the only viable method, Dr Liddle. It’s mandatory to the exercise. You have to demonstrate that a discrete representation is a discrete representation, and for it to actually be a discrete representation it will require something to create the mapping between itself and that which it is to represent. And as in all other cases of information known to exist, those representations will have a physical protocol to establish that mapping. LIDDLE: Well, I would then need operationalisations of “discrete representation” and “physical protocol” then. But I don’t see that they are necessary. [emphasis added] If the output (say an amino acid) maps to an arrangement of nucleotides, then obviously there must be a physical mechanism to do so. But all we need to do to say that information has been created is to demonstrate the mapping, surely? We do not have to say – “oh, and that mapping has to have been created by a physical protocol”. And we certainly do not have to say, surely, that the mapping has to have been created by a non-physical protocol, or a “break in the causal chain”!
You say that all we need to do is show the mapping between the arrangement and the effect. This is effectively the same as saying all we need to do is show that an arrangement of oxygen and iron is mapped to the presence of rust! Without a “break in the causal chain” and a “discrete protocol” to establish the relationship across that break, there CANNOT BE A REPRESENTATION, and hence, NO INFORMATION. So in the last week of July, when you were still saying that “Operationally, we would demonstrate that information was present by the simple observation that the arrangement of input material resulted in specific output”, you are indeed hanging on to a misunderstanding of the dynamics involved, and you are hanging on to this misunderstanding at a very fundamental level. Not only are you hanging on to it, but you feel the need to clarify it for me, as if I need a more precise understanding. You are shoehorning this misunderstanding (and others) into your demonstration amongst a cloud of smoke, and you want me to sign off on it. You say:
I’ll tell you again, and try to make it even clearer … My “arrangements of something” will be strings of “virtual polymers”. I will categorise them by their arrangements, as they will be categorical variables. I will measure their “specific effects” either as longevity (in terms of iterations) or reproductive fidelity (by comparison to “organisms” with different parentage). If the arrangement is a statistically significant predictor of “specific [functional] effects” I will consider my claim demonstrated.
Dr Liddle, there is nothing whatsoever in that methodology that establishes either a protocol or a break in the causal chain, and therefore no representations, and then by definition, no information. Yet you are willing to consider the presence of information “demonstrated” on your behalf.
My claim stands: that Chance and Necessity alone can create information, by any definition of information that anyone cares to offer
Dr Liddle, really, get real. - - - - - - - - - - - - - - - - - - Okay, enough of that. Let’s look at the definitions we’ve exchanged. First let’s remember the conceptual definition which came from the observations of information (any information):
Information is a representation of a discrete object/thing embedded in an arrangement of matter or energy, where the object/thing represented is entirely dissociated from the representation, but where the association of the two can be established by means of a protocol instantiated in the receiver of the information.
And now the last exchanges:
LIDDLE: That, starting only with non-self-replicating entities with a physics-and-chemistry plus random kinetics, self-replicating “virtual organisms” can emerge that contain patterns of “virtual matter” whose arrangement determines the fidelity of its self replication (measured in terms of similarity to its “parent” as compared with a randomly substituted pattern).
I then told you of the gulf that stands between this definition and the observations that led up to it, and I offered this instead:
BIPED: Starting only with non-self-replicating entities in a physics-and-chemistry (plus random kinetics) environment, self-replicating “virtual organisms” can emerge that contain dissociated representations embedded in the arrangement(s) of matter. These arrangements represent the system that created them, and will determine the output of that system by means of an intermediary “virtual object”. Without becoming incorporated, this object may interact with either the representation or output, or both, but where the two remain whole and physically separated. 1) Dissociated = having no physical relationship to that which it represents. 2) Intermediary = serves the dynamic purpose of allowing the input representations to determine the output while they each remain discrete, a facilitator
To which you found the word “representation” objectionable, as well as the footnotes (added for clarification of context) and offered this in return:
Starting only with non-self-replicating entities in a physics-and-chemistry (plus random kinetics) environment, self-replicating “virtual organisms” can emerge that contain arrangements of virtual matter represented as strings that cause the virtual organism to self-replicate with fidelity, and thus determine the output of that system, namely a copy of that system. The arrangement must produce its output by means of an intermediary “virtual object”. This “virtual object” must take the form of a second arrangement of “virtual matter” that may interact with the strings and with some other “virtual object” that affect the fidelity of the self-replication of the “virtual organisms” without either permanently altering, or being altered by, the interaction.
In deference to your many attempts to integrate the requirements of information into your simulation, I suggest that we work from this definition of yours, but I take a couple of exceptions. My priorities have always been to include the existence of specific objects (discrete representations and protocols) and specific dynamics (the discrete-ness of the objects, the break in the causal chain, and the resulting effect - which is the output of the system being the replication of the system itself). As far as your first paragraph above, I have no particular problems. What this tells me is that you will produce a system that is reproducing copies of itself by means of “arrangements of virtual matter represented as strings” which determine the output of the system – a copy of itself. In this paragraph you used the word “represented” which you otherwise objected to when I used it, but I will not throw up a fuss. No one interested in this exercise will misunderstand its use, and if they do, it won’t change the results of your simulation. Therefore, this paragraph establishes my priorities of representations and output, but says nothing of protocols or the break in the causal chain. For this we turn to your second paragraph. In your second paragraph you say:
The arrangement must produce its output by means of an intermediary “virtual object”. This “virtual object” must take the form of a second arrangement of “virtual matter” that may interact with the strings and with some other “virtual object” that affect the fidelity of the self-replication of the “virtual organisms” without either permanently altering, or being altered by, the interaction.
This paragraph is made up of two sentences. The first of these introduces the protocol as an intermediary virtual object, which is perfectly fine, and leaves only the critical dynamic relationship to be established. This is where I take my exception, and I suggest that we leave the other parameters alone, and focus on dealing with this final piece. The observations regarding the dynamics of the protocol are critical. Without them, nothing else can follow; in the case of the representation, it couldn’t even exist. Now, we have already had an extended (and bloody) brawl over the term “break in the causal chain” and over the specific dynamic involved. That brawl can be condensed down to the general observation, and one of your final comments on the matter:
BIPED:Thirdly, to facilitate this dynamic property, there must be a necessary break in the causal chain. This break is exemplified within the cell by the simple fact that proteins are not created from nucleotides. In other words, if you plucked the ribosome from the cell’s protein synthesis machinery, and put yourself in its place, in one direction you would see sequences of nucleotides coming in for translation, and in the other direction you would see sequenced amino acids floating off into the distance to be folded into proteins. One of these marks the input of information (representations instantiated in matter) and the other is the output (a process being dynamically altered by the input). But these are two entirely separate causal chains (if I may use that word). The first causal chain is the sequence of representations, which I say is the product of design, and you contend is the result of chance/necessity. It is made up of nucleic acids. The second causal chain is the bonding within the resulting polypeptide. It is made up of amino acids. The amino acids and the nucleic acids do not interact. They are connected at this dynamic break only by the protocol itself, which I say is the product of design, and you say is the result of chance/necessity. Regardless of who is correct, this dynamic break in the causal chain must be represented in the simulation.
- - - - -
BIPED:ID views these symbols and their discrete protocols as formal, abstract, and with their origins associated only with the living kingdom (never with the remaining inanimate world). Their very presence reflects a break in the causal chain, where on one side is pure physicality (chance contingency + physical law) and on the other side is formalism (choice contingency + physical law). Your simulation should be an attempt to cause the rise of symbols and their discrete protocols (two of the fundamental requirements of recorded information between a sender and a receiver) from a source of nothing more than chance contingency and physical law.
- - - - -
BIPED:Again, it is not at my insistence that the entailment be simulated, it’s a requirement coming from the evidence itself – but there is no miracle there. The tRNA – a physical object subject to physical law – is the protocol that (by its physical configuration) allows the information to be transferred into the output, and thereby constraining it. If there is an unbroken line between the information and its final effect, then no discrete representation could exist, and no protocol either. Neither would even be necessary. This would violate your own operational definition, as well as the dynamic structure that the definition entails.
- - - - -
BIPED:The sequences of nucleotides in DNA, and the order of amino acids in proteins, are two discrete objects. They are separated by both space and direct interaction. They are bridged by transcription and translation machinery which includes a physical object which converts one sequence into the other while they remain separate. It responds to the representation at the input, and transfers that representation to a second sequence which is entirely disassociated from the first. This fulfills the operational definition you put forth.
- - - - -
LIDDLE: Re-reading your paragraph here: ”The sequences of nucleotides in DNA, and the order of amino acids in proteins, are two discrete objects. They are separated by both space and direct interaction. They are bridged by transcription and translation machinery which includes a physical object which converts one sequence into the other while they remain separate. It responds to the representation at the input, and transfers that representation to a second sequence which is entirely disassociated from the first. This fulfills the operational definition you put forth.” This is absolutely fine. As I said I’d thought we were nearly there. But here you specifically describe the link (the dissociated link) as “a physical object which converts one sequence into the other while they remain separate” No problem. I’m not quite sure why you describe this as a “break in the physical chain” or a “break in the causal chain”, because it seems to me to be neither. But if that “break” can take the form of a “physical object” there is no problem at all.
Here you introduced the phrase “dissociated link”, which I thought had some promise. We can either work from there, or start anew. As long as we can fully capture the dynamics involved (and perhaps take a moment to reflect on the entire resulting definition), I will be prepared (obviously speaking only for myself) to stand by that definition for the purposes of your simulation. Upright BiPed
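For what it is worth, the scaffolding the wording above seems to call for can be sketched in a few lines. This is emphatically not Dr Liddle's simulation; every rule and name in it is invented for illustration. A token string serves as the "representation", a separate lookup object serves as the "protocol", and the output is a copy whose fidelity depends on the string. The whole difficulty, which the sketch does not touch, is that here the protocol is written in by hand, whereas the challenge is for such a mapping to arise from the simulated physics alone.

    import random

    # Invented scaffolding only: a token string (the "representation"), a
    # separate mapping object (the "protocol"), and replication whose fidelity
    # depends on the string. The PROTOCOL table is hand-written here, which is
    # precisely what the challenge says must instead emerge from the simulation.
    PROTOCOL = {"A": 0.99, "B": 0.90, "C": 0.50}   # token -> per-token copy fidelity

    def replicate(string, rng):
        copy = []
        for token in string:
            fidelity = PROTOCOL.get(token, 0.25)
            copy.append(token if rng.random() < fidelity else rng.choice("ABC"))
        return "".join(copy)

    rng = random.Random(1)
    parent = "AAABAC"
    print(parent, "->", replicate(parent, rng))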
To all following the Upright BiPed vs Lizzie cage fight
A quick side note… I have failed to openly acknowledge something, which I should now correct. The conversation between Dr Liddle and myself regarding the rise of information has now gone on for several weeks, and given the very competitive nature of the conversation, it has been mostly cordial (with an occasional splatter of blood here and there). My own belief is that the only reason this conversation has taken place at all is because Elizabeth Liddle allowed it to happen. One of the main problems in the exchanges which typically take place on UD is that the opponents of ID rush headlong into obfuscationville at the first sign of anything interesting. It is a simple fact that many, many ID critics run for the tall grass at the first sign that ID might (egads!) make a valid point in their presence. Others launch their entire attack from the safety of the tall grass. And if not doing this, as Dr Liddle has not done, is a sign of courage, then I would say that Dr Liddle is about as courageous as any ID critic I’ve seen on UD in some time - certainly she is magnitudes more courageous than her sister-in-arms Patti, the internet gender-bender known as Mathgrrl who couldn’t wait to get into the weeds (and refused to come out). Anyone who has read Patti unplugged will recognize that she is too eaten up with hatred to allow a viable conversation of any kind to take place, and is therefore as useless to me as she is to herself. Even though I have repeatedly challenged Dr Liddle from the very start, and even though I know (from her own words) she has been completely confused by the argument before her, and even though I will continue to argue with her over what remains in her misunderstandings – there is no doubt that I owe this entire conversation to her, and I am grateful to her for it. I will certainly regret it if she now runs this into the dirt over what she sees as a procedural violation – the upsetting possibility that some educated person might not know what a “representation” is in the context of “information”. Upright BiPed
Sorry for the delay...a post or two coming up now... Upright BiPed
MI, I have few quibbles with your last post, except to flag that the concept of 'mind' is of course a thorny one that will need further unpacking to avoid talking at cross-purposes. The reason I chipped in was just to point out that the definition of yours I quoted badly stacks the deck against Lizzie's simulation, in effect defining the possibility of success out of existence. I'm sure there are other ways of looking at the concept of information that will allow for a meaningful experiment, and hopefully Lizzie and people here can agree on one of them to take this very interesting project forward. Over and out. fG faded_Glory
MI, lots of good suggestions there. In fact, I was thinking of writing to Meyer. Elizabeth Liddle
*than trivial* ... ... in the last paragraph. Sorry. material.infantacy
fG, Fair enough, and thanks for assuming a stance which gives the scope of propositions fair breadth. Without invoking theology it would be difficult to contradict the notion that there are elements of nature for which design need not be pleaded, in my estimation. We would like to focus on those for which it can; namely, living organisms. Personally I'm open to one of two propositions being true: that either living organisms are perfectly capable of being produced by chance and necessity within the time frame required (that is, explicable in terms of the laws of physics) or that living organisms are only explicable in terms of a prerequisite intelligence: that what we observe, specifically in living systems, is corollary to the presence of mind. With that on the table, I suggest that parsing some ID proponent's definition of information (on a discussion board, no less) couldn't be considered much more significant than scoring a rhetorical victory against the ID movement. I'm making the presumption that Elizabeth's simulation should accomplish more than that, to anyone's satisfaction, if the claims are to match the accomplishment, because programming a simulation in which the output would measure positive for the presence of CSI is little more that trivial. Best, m.i. material.infantacy
I actually have some sympathy for the idea that information requires the involvement of a mind. I think though that this is the case on the receiver side, not necessarily on the source side. Take an outcrop of sedimentary rocks. These strata were deposited by physical and chemical forces plus a dose of contingency. I doubt anyone sees a need to invoke a mind for the deposition of sediments? To a geologist these rocks contain a lot of information. They allow him to reconstruct the ancient depositional processes and environment. But if nobody is looking at it, it is just a pile of rocks without conveying much information at all. DNA may be in the same place - it takes a human mind to identify the information in it, but that doesn't mean there was a mind involved in the generation of it. And I am not yet convinced that the material interactions in the cell really are a form of information when nobody is looking at them. fG faded_Glory
Person 2: (throws herself onto the tracks) Mung
Representations "Information is as information does" - such is the watchword of the operational theory of information. Where it selects or constructs tangible representations, information is easy to measure, at least in principle.
General information theory is concerned with the problem of measuring changes in knowledge. Its key is the fact that we can represent what we know by means of pictures, logical statements, symbolic models, or what you will. When we receive information, it causes a change in the symbolic picture, or representation, that we could use to depict what we know. We shall want to keep in mind this notion of a representation, which is a crucial one. Indeed, the subject matter of general information theory could be said to be the making of representations - the different ways in which representations can be produced, and the numerics both of the production processes and of the representations themselves. By throwing our spotlight on this representational activity, we find ourselves able to formulate definitions of the central notions of information theory which are operational, with more resultant advantages than just current respectability. In any question or debate about "amount of information", we have simply to ask, "What representational activity are we talking about, and what numerical parameter is in question?" and we eliminate most of the ground for altercations, or we ought to do so if we are careful enough! - Donald M. MacKay, Information, Mechanism and Meaning
Mung
All the characteristics of a train include that the vehicle should be running on rails. If you can't come close enough to establishing that, it might just be that you're actually looking at a trolleybus. So, your presumption would be hasty and wrong. fG faded_Glory
We can never presume that a train is a train, unless we can prove it’s running on tracks -- even though it looks exactly like every train anyone has ever seen; and we have never seen a thing which has all the characteristics of a train, but is something altogether different. The probability is not zero that it's actually a billy goat, therefore it’s a billy goat. material.infantacy
Person 1: trains run on tracks. Hey look, over there, a train! Person 2: Are you sure it's a train? We can't see if it runs on tracks or not. Person 1: Of course it is a train. It looks exactly like all other trains I have seen. Therefore, it surely must run on tracks. Yes, it is a train! Person 2: Sorry, but until you show that it runs on tracks you can't be sure it is a train. Person 1: But don't you see, it must run on tracks, how else could it be a train? Person 2: (throws herself under the vehicle) fG faded_Glory
That said, we are offering ways for Lizzie to demonstrate the presence of information that we can agree would establish that Chance + Necessity sans intelligence can do what she claims. These do not require a definition of information. For example, if she can demonstrate the existence of representations, protocols, and effects in her simulation such that these three items arose from Chance + Necessity sans intelligence. Now we need to see if we can make progress. Mung
Mung, I don't know how you do it. #155 illustrates where this is at. Nicely done. material.infantacy
Person 1: Trains run on tracks. Person 2: No, I can create a train that does not run on tracks. Person 1: That's not logically possible. Trains, by definition, run on tracks. Person 2: I don't care. If I can demonstrate the existence of a train that does not run on tracks your claim will be falsified. Person 1: Have at it. Person 2. First we need a definition of a train. Person 1: A vehicle that runs on tracks. Person 2: No, that would be circular! Person 1: Not my problem. That's what a train is. Person 3: Person 2 is correct. If you don't let Person 2 redefine what a train is, there is no way Person 2 can create a train sans tracks. Person 1: Sometimes life is just cruel that way. Reality bites. Mung
I'll toss this out for comment: Information: the presence of *specified complexity* in a system exhibiting the *irreducibly complex* integration of independently designated parts, each of which is constructed at a bit depth in excess of 500. Specified Complexity: the specification for a functional system which has only an abstract association with the product it specifies, limited to acting as a template or archetype, from which the product it represents is instantiated by way of an intermediary. Irreducible Complexity: the requirement that in a functional system of integrated components, the removal of any single component causes the system to become non-functional. material.infantacy
How about this one: information is a contingent arrangement of matter that imposes the illusion of having been specified for a purpose. Or this one: information is any contingent arrangement of matter for which a meaning can be arbitrarily assumed. Or this one: information is any contingent arrangement of matter for which function can be arbitrarily defined. Or how about: information is an emergent property of mind, which is an emergent property of life which is an emergent property of matter which is an emergent property of gigantic freaking explosions in the midst of absolute nothingness which produce space, time, and the laws of physics. This train's pulling up to the station at post #127, and it looks like that's where my ride ends. I strongly suggest Elizabeth gets her "definition" from the likes of Stephen Meyer, and offer to submit her simulation source code to EIL so that they can determine if she smuggled in specification or failed to meet the specified complexity threshold for applying the CSI metric. I got a headache. material.infantacy
To clarify, my last sentence above should read: And clearly you can’t point to the information in the DNA itself as the proof that intelligence is involved, that would be hopelessly circular! faded_Glory
"If ID proponents define information as that which is the product of intelligence, and if that’s troubling to you, it seems to me that the refutation is conceptually simple: demonstrate that what we call “information,” can be generated blindly... " ----- This is quite impossible. Whatever it is that Lizzie's program will produce, it will never be Information in the sense you defined it, by definition! The moment you define information as requiring intelligence, nothing in the world that doesn't use intelligence will ever be able to produce Information. One cannot prove definitions, one can only agree or disagree on them. What one can prove, or disprove, is if something fits a particular definition or not. You are free to define Information as requiring intelligence, if you so want. However, in that case the task is to establish that a study object actually conforms to your definition. Since the definition of Information you use requires intelligence, you will actually need to demonstrate this intelligence before you are justified in pronouncing the presence of Information! To establish if a study object has a particular property, you need to demonstrate that is satisfies ALL that is included in the definition of that property, not just SOME of it. If you define a train as a vehicle that runs on rails, you need to establish that an object is a vehicle AND that it runs on rails before you are justified to call it a train. If you define Information as, say, something contained in a complex and specified arrangement of matter that requires intelligence for its origin, you need to establish that an object is a complex and specified arrangement of matter, AND that it took intelligence to generate it, before you are justified to say that it contains Information. You can't leave off that last part and still make your claim! If ID defines information as requiring intelligence, it needs to identify intelligence involved in the generation of DNA, before it is justified to call DNA information. And clearly you can't point to the DNA itself as the proof that intelligence is involved, that would be hopelessly circular! fG faded_Glory
OK, got a big day today, catch you guys later. I scent progress in the air. Elizabeth Liddle
Mung said, “So I’m concentrating more on trying to say what is common to such systems.” I’m revisiting this because I think it’s important. I suggest that DNA, RNA polymerase, and DNA polymerase are the most ubiquitous, tightly coupled, and interdependent triplet of elements we could possibly consider for a simplified self-replicating prototype model that could exhibit the unequivocal presence of specified complexity, if generated blindly by way of a fair simulation. DNA polymerase can be thought of as a black box representing the entire chain of events beginning with the traversal of the complete DNA strand, and ending with the successful, faithful reproduction of the complete, functional prototype. Likewise, RNA polymerase can represent a black box which takes as input a section of the DNA strand, and outputs a sequenced, folded, functional protein. The chain of events that begins with traversal of the strand and ends with the folded protein can be implicit, that is, assumed. These three, taken together, should reasonably constitute a functionally integrated, self-replicating virtual organism capable of demonstrating that specified complexity has been achieved via blind evolution, assuming appropriate metrics are established for sequence length and character set, and so on. It should also be noted that I’m a little drunk right now; so I suggest that the contents of this post be disregarded, if it can be demonstrated that what I’m saying at the moment is completely absurd. mi material.infantacy
Lol! I'll take two of those, please. material.infantacy
We need an error-correcting enzyme! Mung
Correction to the above, #145. I don't have any idea why I keep typing transcriptase instead of polymerase. I've had to correct myself about a dozen times, but this time one got through. So anywhere you might encounter the fictional "RNA transcriptase" enzyme in anything I write, know that I intended to reference the quite legitimate RNA polymerase. material.infantacy
Mung,
"First time through I just read them for what they are without trying to do any deeper analysis."
Lol, don't count on there being anything deep enough to keep you busy for long. xp
"But it seems that you and Elizabeth have in mind a similar approach, which is to define a concrete system which if achieved can be said to demonstrate the presence of information. Is that fair?"
I'm not sure that Elizabeth has dealt with my proposal yet, from my point of view. We had some pleasant discussion for a spell; but I'm suggesting that her inert tRNA intermediary is unnecessary and that she needs to use an RNA polymerase analog, which I believe exposes a paradox. This is the simplest form of the problem that I can devise, which I believe is sufficient to establish the presence of information. I'd certainly appreciate it if you or Upright BiPed or anyone else could expose any inadequacies or fill in details, unless it's just plain flawed. I'm still hoping Elizabeth answers my question in #127, the diagram being described by me in #116, which I thought might advance the conversation. From what I can tell, Elizabeth is leaning toward a more concrete target, but exactly what she intends to model I couldn't say; she may not have decided yet. I'm proposing that the DNA -> RNA Transcriptase -> Sequenced protein would be adequate with the caveats as listed in #116. I'm willing to expand on the proposition, but I didn't want to bother unless I got a nibble.
"My complaint about Elizabeth’s operational hypothesis is that it could not be extended to any other system which we know to be indicative of information."
I had the same concern. Unless we use an analog of a living system, success or failure will depend upon one's ability to parse a definition.
"So I’m concentrating more on trying to say what is common to such systems."
This is key I think. I look forward to your posts on this subject! m.i. material.infantacy
hi m.i, I need to read your posts again to see to what extent they might contain the "indicators of information" that I think are relevant. First time through I just read them for what they are without trying to do any deeper analysis. But it seems that you and Elizabeth have in mind a similar approach, which is to define a concrete system which if achieved can be said to demonstrate the presence of information. Is that fair? It will be interesting to see if there can be a meeting of the minds. My complaint about Elizabeth's operational hypothesis is that it could not be extended to any other system which we know to be indicative of information. So I'm concentrating more on trying to say what is common to such systems. I guess it's a difference between a bottom up and a top down approach, lol. Meet you in the middle! Mung
Thanks Mung. I think the concrete approach is superior because it removes any ambiguity about what should be accomplished, and frames the problem in terms of what we observe. Unfortunately the solution Elizabeth is hoping for is unobtainable in those terms. material.infantacy
Hello all, I've just returned home from the weekend, and see quite a bit of action on this thread. I will catch up and respond tomorrow. Thanks. Upright BiPed
Circularity: A Trip Down Memory Lane Because it's probably going to come up again. Elizabeth Liddle @96:
However, what dictionary says the word means is irrelevant. Dictionary definitions are not prescriptive, they are a record of usage. I have no quarrel with any usage of the word “information” nor with any dictionary definition. Why should I? If the word is used in those ways then that is the way the word is used.
Why does this principle not likewise extend to words such as sign, symbol, and representation? Elizabeth Liddle:
So if information is defined in terms of “representations” or “symbols” then those words, in turn have to be definable in ways that do not use the word “information” or that do not use words that themselves are defined in terms of the word “information”.
If that's what the dictionary says that's what the dictionary says. Elizabeth Liddle:
For example one dictionary definition of “symbol” is: A thing that represents or stands for something else, esp. a material object representing something abstract. So we turn to “represent” and we find: “to serve as the sign or symbol of”. So we turn to “sign” and we find: “an object used to convey information”.
So if a symbol is a thing that represents or stands for something else, and representations serve as signs or symbols of something, and signs are objects used to convey information, perhaps the presence of representations, signs and/or symbols ought to convey to us that they provide a means by which we might discover that information is present. If not, why not? The thing represented is not the same as the representation. If your simulation does not take that into account I don't think anyone here will say it demonstrates the generation of information. Note there is nothing circular here with regard to information. These are not definitions of information. These are "signs" of information. Elizabeth Liddle @126:
If the sense in which ID proponents use the word requires “intelligence” to be part of the definition, then the original ID claim - that only intelligence can generate information - is vacuous. True, I can’t refute it, but who would bother?
You're confused. You are going to show us how Chance + Necessity sans intelligence can generate Information. If you do that, you will have refuted the ID claim. You will have demonstrated that neither mind nor intelligence is required. Elizabeth Liddle:
In that case the meaning of the word “information” in the claim must have a definition that does not assume the truth of the claim!
You are mistaken. If you are successful, you will have shown that the definition used by ID is lacking in that it fails to include Chance + Necessity as being capable of generating information. material.infantacy:
If ID proponents define information as that which is the product of intelligence, and if that’s troubling to you, it seems to me that the refutation is conceptually simple: demonstrate that what we call “information,” can be generated blindly, and seek ye not a definition but rather a concrete example (one that we can’t deny the presence of “information” in, and yet for which the cause is the thing to be determined) and find out whether the cause for it can be blind.
Well said. Elizabeth Liddle:
I agree, MI, and in effect, that’s what my operationalisation does.
I've seen from you an "operational hypothesis," is that what you mean by an operational definition of information? Isn't your operational hypothesis that Chance + Necessity sans intelligence can generate information? So now don't we need an operational definition that we can use to say that information is present? Mung
100 dots/100 symbols, Mung. Grr... You wrote:
And the frequency of dots (dots: non-dots) in your sample is 100%.
DOTS/NON-DOTS. Your words. It's really annoying to show why someone has not made the point they think they made only to have them say that what they said was not what they meant. Please try to be more precise in what you say, because what you say is what I have to go on. Mung
Well, not an "A" because that in itself is a symbol so there's a whole other translational stage involving people. But a sequence of bits that comes to "represent" something useful that happens, where "represent" means "brings about". We've got to be careful with "represents" because it's hard to define without landing back at "information" again. But I like the way you are going. (What am I saying....?) Elizabeth Liddle
Lizzie, I want to spend some time reading through MI's posts, which I've put on the back-burner for too long. But so far you seem to agree with me, and I'm pretty sure you reached this same point with Upright BiPed, that at a minimum we need to be able to discern representations. How are we coming so far? So for example, if I were examining a computer, I might discover that a specific sequence of bits on the hard disc represents an 'E'. What is the material mechanism of Chance + Necessity that can bring about the state of affairs by which a sequence of bits comes to represent an 'A'? That would be the first hurdle and the first clue that we are in the presence of information. Agree, disagree? Mung
100 dots/100 symbols, Mung. duh. But as you say, it was a rhetorical point. We are now on the same page. Elizabeth Liddle
100 dots / 0 dots = an infinity of dots. Ilion
p.p.s. I just tried to program the calculation into my computer. Divide by zero error! 100 dots / 0 dots = ????????? http://en.wikipedia.org/wiki/Division_by_zero Mung
p.s. So no, you won't get 0 bits of Shannon information. GIGO. Mung
Elizabeth Liddle:
And the frequency of dots (dots: non-dots) in your sample is 100%. So, the probability of each symbol being a dot in the string, as we go along the string, is 1. And if you plug that into Shannon’s formula you’ll get zero bits.
And you will have just abused Shannon's equation to score rhetorical points. But one thing at a time. How do I count the number of non-dots? I can walk through the parking lot at the local shopping center and count the number of red cars and the number of blue cars, and calculate the frequency of red cars to blue cars and vice versa. I can count the total number of cars and calculate the frequency of red cars to non-red cars. But how do I go through the parking lot and count the non-cars and calculate the frequency of cars (dots) to non-cars (non-dots)? Absurd. Any other time-wasting mental exercises you want me to perform? Mung
Exactly Mung. We agree. We have never disagreed. As I have been telling you over a gazillion posts in a gazillion threads! This disagreement is no more! It has ceased to be! It's expired and gone to meet its maker! It's a stiff! Bereft of life, it rests in peace! If you hadn't nailed it to the perch we could all have saved a lot of bandwidth! THIS IS AN EX-DISAGREEMENT!! Elizabeth Liddle
Elizabeth Liddle:
If you were to find a row of 100 dots somewhere, and you had some reason to think it was the relict of some old message, how much information, in bits, would you say it contained?
Have you not been paying attention to a word I've written, lol? NONE! I don't believe information is contained in material objects. Let me refresh your memory. https://uncommondesc.wpengine.com/intelligent-design/since-you-asked/ Post #1. ME: They do not CONTAIN information. At best the dots can be arranged in such a way that they represent something. Of all the ways that bits of matter can be arranged, how many of those arrangements end up as arrangements of symbols on a blog and do not have as their source an intelligent cause? Mung
The screen is irrelevant Mung. My point is quite extraordinarily simple. If you were to find a row of 100 dots somewhere, and you had some reason to think it was the relict of some old message, how much information, in bits, would you say it contained? Now you have no information as to whether dots are very rare possible symbols or very common ones, or from what population of symbols they are drawn. You only have that sample. And the frequency of dots (dots: non-dots) in your sample is 100%. So, the probability of each symbol being a dot in the string, as we go along the string, is 1. And if you plug that into Shannon's formula you'll get zero bits. However, if I now tell you (independent information) that the row of dots represents EKG output samples and that a dot represents zero voltage, and that a whole other range of symbols represents non-zero voltage, then suddenly that message has a lot of meaning. It also now has a lot of Shannon information, because now each item represents a huge reduction in uncertainty. But the thing that made the initial string jump from No Bits to Loads Of Bits was nothing intrinsic to the string; it was the information you had about the source of the string. So where is that information located? In the string? Or in the verbal information I gave you? Well, as you will agree, it's a stupid question, because information isn't intrinsic to a message, it's a property of a communication process in which that message forms a part, in this case between you and me, using the string. So to get back to my little project: I am proposing to demonstrate that a communication process of a type that qualifies as information by the criteria of at least someone here, can be generated by Chance and Necessity only. And what I propose is to start with a population of non-self-replicators, let a self-replicating virtual critter emerge from the physics-and-chemistry of my starting conditions, plus random motion, and hope that those critters will contain sequences that map to specific functional effects. For example, let's say that a particular sequence in some of my critters is ABDC and that when that particular sequence is present, a sequence of events is triggered that increases the fidelity with which that critter self-replicates. Other critters, otherwise very similar, but which do not have that particular sequence, self-replicate less faithfully, or disintegrate before self-replication can occur. Now I would say that if I could do this, and if an independent person could look at my critters and say: Aha! yes, look, the ones with ABDC sequences do better, because look, on those critters, another sequence tends to form next to the ABDC sequence, and that sequence then detaches and attaches to the virtual "membrane" of the cell and makes it a bit tougher. And look! the ones with BBDC sequences also do rather well! But this sequence tends to promote the formation of a cutting sequence that splits the membrane at a point in the virtual temperature cycle that maximises the chance that the whole thing will divide at the optimum time. Now that would, I suggest, satisfy Meyer's definition - an arrangement of something that produces specific effects, in this case actual functional effects that promote the replication of the virtual critters. Now: if I did this, would anyone think that a basic tenet of ID had been breached? If not, why not? Because I can keep on trying to make it harder for myself :) Elizabeth Liddle
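To make the dots arithmetic concrete, here is a minimal sketch in Python (the alphabets and probabilities are made up purely for illustration, not part of anyone's proposed simulation) showing how the Shannon figure tracks the assumed source rather than anything intrinsic to the string itself:

import math

def shannon_bits(message, p):
    # Self-information summed over the message, using probabilities
    # assumed for the source (not measured from the string itself).
    return sum(-math.log2(p[symbol]) for symbol in message)

dots = "." * 100

# If a dot is the only symbol the source can emit, each dot is certain:
print(shannon_bits(dots, {".": 1.0}))   # 0.0 bits

# If we are told the source emits a dot (zero voltage) or any of 15 other
# equally likely symbols (non-zero voltages), the same string now carries:
print(shannon_bits(dots, {".": 1/16}))  # 400.0 bits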
Elizabeth Liddle:
An EKG monitor. But I just gave you extra information.
I wanted to make sure we were talking about the same thing. So the receiver is someone watching the monitor, correct? A person. I just wanted to distinguish between that and the case where the monitor is the receiver and the source is the heart or the sensors used to feed the monitor. So here's a description: http://www.webmd.com/hw-popup/ekg-components-and-intervals Now in Shannon's paper he makes explicit reference to such screens as information sources. So I am a bit puzzled as to why you think he didn't cover the topic in his paper. Mung
MI:
If ID proponents define information as that which is the product of intelligence, and if that’s troubling to you, it seems to me that the refutation is conceptually simple: demonstrate that what we call “information,” can be generated blindly, and seek ye not a definition but rather a concrete example (one that we can’t deny the presence of “information” in, and yet for which the cause is the thing to be determined) and find out whether the cause for it can be blind.
I agree, MI, and in effect, that's what my operationalisation does. In fact I even asked at one point: if I did this, and it resulted in this, would you be satisfied? But I'll try again. It's a good thought. Thanks. Will also look at your other link. Elizabeth Liddle
Hi Elizabeth, a quick response, and a request. Rhetorically stated: how can you define something you're trying to explain? Doesn't the definition follow the cause, by necessity? It seems to me that the point under contention is this: information is either rooted in mind, or can arise via physical processes. If ID proponents define information as that which is the product of intelligence, and if that's troubling to you, it seems to me that the refutation is conceptually simple: demonstrate that what we call "information," can be generated blindly, and seek ye not a definition but rather a concrete example (one that we can't deny the presence of "information" in, and yet for which the cause is the thing to be determined) and find out whether the cause for it can be blind. Apologies for the hastily composed points above. Elizabeth, I won't hold you to a point by point response to my long-winded post beginning at #114, but will instead ask this of you: From the diagram I provided in #116, will you look and see if you can identify any circularities? So that you don't think I'm being sneaky, I'll say that there's at least one, and that you or others here might be able to identify more than one; but there's at least one glaring circularity and it's specific to no definition of information whatsoever. Enjoy the rest of your Sunday, m.i. material.infantacy
MI: lots of interesting points in your posts, which I hope I'll have time to address later (though things are getting tight), but I'd like to take you up on one point:
But Elizabeth, if it does require a mind, and you’re asking for a definition of information which doesn’t, we have a problem; and that’s what we’re trying to establish — whether information requires a mind — so your quest for this definition seems to undercut what you want to demonstrate.
Well, yes, but that's the whole point! If the sense in which ID proponents use the word requires "intelligence" to be part of the definition, then the original ID claim - that only intelligence can generate information - is vacuous. True, I can't refute it, but who would bother? My assumption is that the original ID claim is NOT vacuous, that it actually means something, testable. In that case the meaning of the word "information" in the claim must have a definition that does not assume the truth of the claim! What I suppose I'm really trying to do here is not so much define my claim, but define the ID claim, or one of them, in a form that can actually be tested against my counter-claim. But so far I haven't had anyone willing to stand by even a conceptual definition that is neither narrowly circular (information is to do with representations which is to do with information) nor one that renders the entire ID claim circular (intelligence is required to generate that which intelligence is required to generate). At this stage, as Meyer's book, Signature in the Cell, has been widely recommended to me, I'm inclined to contact Meyer and see if he will stand by the ID claim when expressed according to the Merriam-Webster definition that he cites. Unless someone else will provide one first! The silly thing, from my PoV, is that as far as I can see, I've produced both a conceptual and operational definition that satisfies all Upright BiPed's requirements, and yet no-one will sign off on it! And I honestly don't know why. Elizabeth Liddle
Mung:
Does your example have an information source? What is the source for the message in your example?
An EKG monitor. But I just gave you extra information. Elizabeth Liddle
WR:
So what’s the answer?
What's the period? Mung
Elizabeth provided an example which she thinks demonstrates 0 bits of Shannon information in a message:
An example of meaningful message with 0 bits: http://toolstolife.com/images/content/flatline.jpg It’s the lack of bits that conveys the meaning.
So I asked:
Please identify the section [of Shannon's paper] which deals with your example.
She responded:
When I last read it, there wasn’t one.
Elizabeth, Does your example have an information source? What is the source for the message in your example? A Mathematical Theory of Communication Mung
Mung,
So if you know the period…
So what's the answer? WilliamRoache
Merriam-Webster:
Frequency: 2a : the number of times that a periodic function repeats the same sequence of values during a unit variation of the independent variable 2b : the number, proportion, or percentage of items in a particular category in a set of data
I was referring to sense 2b. hth. Elizabeth Liddle
Elizabeth Liddle:
In a string of 100 ones, what is the frequency of ones?
Do you know what the word frequency means? What is the period? The period is the reciprocal of the frequency. So if you know the period... Mung
MI, thanks for your very substantial responses! I will try to respond in detail later. Meanwhile, you wrote:
Elizabeth, you seem very sweet and I hope you’ll understand if I don’t come over and register. I prefer the safe, warm environment under the wings of our loving and diligent moderators. xp
Fair enough :) Although I intend to be a "loving and diligent moderator" to the best of my ability over there too! And the slower pace might suit.... But no problem. I'm happy to converse on this thread. (Although it's a shame we derailed the original, which is actually very interesting!) Elizabeth Liddle
Oh, dear, kf, we do seem to be at cross-purposes! While you are battling Emily, I will try to figure out why :) Good luck! Elizabeth Liddle
Dr Liddle: Pardon. With a 6 hr power cut looming, let me quickly note on your 107:
that needn’t stop us establishing criteria by which to decide what does, and does not, qualify as “information” for the purposes of ID’s claim (that it cannot be generated by Chance and Necessity only).
By now you should have seen what is above or already detailed elsewhere that takes on just that project. Similarly, you should be acknowledging the point that the whole project of Shannon and co was in the context of a prior recognition of meaningful signal vs meaningless noise, distinguished by observable characteristics on something as "simple" and direct as a cathode ray oscilloscope. Please reflect on the background to the eqn: Chi_500 = I*S - 500, bits beyond the solar system threshold. For this directly answers the question you keep asking while you keep failing to seriously address the proposed answer. Good day GEM of TKI kairosfocus
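For readers trying to follow the notation, here is a minimal sketch in Python of how the cited expression would be evaluated; treating S as a 0/1 specificity dummy variable, and 7 bits per ASCII character, are assumptions made purely for illustration:

def chi_500(info_bits, specified):
    # Chi_500 = I*S - 500: a positive value is read as more functionally
    # specific information than the 500-bit solar system threshold allows.
    s = 1 if specified else 0
    return info_bits * s - 500

print(chi_500(72 * 7, specified=True))    # 72 ASCII characters ~ 504 bits -> 4
print(chi_500(10**6, specified=False))    # unspecified, however long -> -500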
...continued... So I’m proposing that the target of your simulation should be analogous to the informational core of a living cell. It needs virtual DNA which codes for virtual proteins, and at least one protein in particular needs to be present both in form and specification at the same time: the RNA polymerase analog. Now I think you can do away with tRNA as it’s just an adapter (as remarkable as its presence is) but it will need to be replaced with a virtual RNA polymerase. This can be a “black box” but it must be specified in the DNA, and it must also be present in the earliest form of the target proto-organism. I’m willing to bet you’re not going to accept this as valid, so I’ll try not to belabor the details too much. See this diagram (I hope the link works): A potential analog to the information core of a living cell. Ra and Rc represent RNA polymerase, with sequence length n, in both the abstract and the concrete forms. Rc is a black box which takes a strand of DNA as input and outputs a sequenced protein. In the proto-organism, it needs to be already present in order for the system to function (I might argue that so does DNA polymerase if we’re going to establish that the organism can validly reproduce). Rc must also be encoded into the DNA strand, and it needs to be the first thing that is, because without it, the clock can’t tick, so to speak. It functions as a file header of sorts, a protocol, with which to bootstrap the rest of the organism. If someone were to come across a strand of this DNA, they should be able to bootstrap it by decoding the header (the valid sequence for Rc) and assembling the enzyme, which could then catalyze the production of the other proteins coded for in the strand. I believe that at a minimum, something like this would need to be “evolved” in order to demonstrate that specified complexity can be generated via blind processes. There would still be questions that needed to be answered, such as: what should the sequence length n be for Rc (and Ra); and how is function determined? This is significant because, I suggest, if you define a function, e.g., “permutation x of a sequence of length n is defined as the function for Rc,” then you’ve smuggled in specification; and if you determine function based on the actual sequence for RNA polymerase, then you’re left with the same impossible search for a function that I suggest is present in the OOL problem. If you define your own search for a function, the sequence length would still need to be long enough to be validly measured against CSI, and so you would still have a search problem -- one that dwarfs the number of atoms in our universe multiplied by every Planck time quantum state that’s ever occurred in its history. Thanks much for your time, Elizabeth, The bloke, m.i. material.infantacy
...continued...
hmmm. If it’s just philosophical, then we aren’t doing science
While I understand the humor here, and even chuckled at it, I hope you understand that design detection is separate from the philosophical implications of that which constitutes a mind. I don’t mean to seem sensitive, but this type of conflation between ID and its implications is one of the frequent and ill-founded arguments used to implicate design detection as being scientifically invalid.
So let’s keep the “aboutness” here very specific: a message is “about” something if it, to use Merriam Webster, “produces specific effects”.
I’m obligated to add that a message is about something if it can be used to produce specific effects in another medium to which it has no direct physical connection, by way of an intermediary. The intermediary can have a physical connection to both the message and the result, but the message and the product need to be physically unrelated, like wood to a magnet. Now I’d like to be clear about something unrelated: I’m not somebody who can speak for the ID movement. This is just “some bloke’s opinion,” and that bloke happens to be me.
You are wrong
I’m glad. =D I hope you understand why that crosses the minds of visitors here though, that you might be trying to squeeze out a definition of information that would allow you to smuggle it in, or disregard the significance of “meaningful.”
Hence my new blog, for which I am just about to compose my first post Free (virtual) beer for all on opening day:
I could really stand to knock back a few of the real thing right now! Elizabeth, you seem very sweet and I hope you’ll understand if I don’t come over and register. I prefer the safe, warm environment under the wings of our loving and diligent moderators. xp Not to mention, I think that this is all riding on a definition of information that’s divided along the separation of world views, the way that tax cuts are divided along party lines in the US legislature. So I don’t expect to be arguing all the fine points ad nauseam, especially considering your lightning fast pace, and my lack of expendable time.
That’s fine. In fact, I think that CSI has an important relationship with UBP’s concept, but that UBP’s concept is better. Dembski’s concept is that design can be inferred if CSI is detected in a pattern. UBP, rightly IMO, says no – the information transfer process is intrinsic to the concept.
This may be because of the unique requirements for proving that specified complexity has been generated, and not so much about using the CSI metric to account for the presence of specified complexity itself. Think of it this way. I could open my IDE and write a program which generated long strings of sequenced symbols, and then define, either explicitly or implicitly, a function for those symbols in the context of computer code, much the same way a compiler tokenizes source code and generates machine instructions. Given sufficient string length, the presence of CSI could be established, but it would not be blindly generated. So while we could potentially determine the presence of information by applying the CSI metric, we couldn’t determine its source: mind or blind. For that, we need a concrete example. The only suitable one is the thing which inarguably contains gobs of specified complexity but which arguably could be either designed or blind. Continued... material.infantacy
Elizabeth, Thanks for your response, and for taking the time to explain your position on my various statements. I’m going to provide an illustration later in this post which demonstrates what I think would prove the blind generation of specified complexity. I don’t think you’ll like it. xp First I’ll comment on some of your responses.
I’d say that the transcription process (getting from RNA to amino acid) is entirely dependent on a physical process – the locking of a tRNA molecule to the only codon it can, and the locking of the only amino acid that can lock on to the other end of that molecule.
That may very well be the case, that the transcription process is entirely dependent on a physical process, and I certainly wouldn't argue that in the context of this discussion, if at all.
I see nothing “non-physical” about this, nor any “break in the causal chain” as UPD also put it.
Indeed, the process of transcription displays no break. The paradox arises while crossing the threshold between self-replicating molecules and the proto-self-replicator. RNA polymerase needs to exist before it can transcribe the protein, but it must also be encoded into the DNA. However, the specification by definition needs to precede the concrete product, yet without a fully formed RNA polymerase, the specification can't be translated. Think of RNA polymerase as the protocol. Regarding your elucidation of tRNA, thanks, very educational.
So the real informational magic, as I see it, is not the transcription process itself, which is perfectly physical, but in the DNA sequences that gives rise to one of the sets of tRNA molecules that could do the job. And, interestingly, coding a tRNA molecule from a DNA sequence is directly physical.
I can agree with that. Again however, it’s RNA polymerase that needs to be accounted for; and while the translation process from DNA -> … -> polypeptide can be considered physical, it would be more difficult to account for the embedded sequence for RNA polymerase which would need to co-exist with the already sequenced and folded enzyme itself.
Not sure what you are saying – DNA is a molecule! Could you explain?
I should have been more clear that I was describing an OOL scenario, and depending on your approach, you wouldn’t necessarily need to account for it. I withdraw the comment, and if it becomes relevant again I’ll explain in more detail.
Yes, and as I said, that’s why I’m trying to avoid high-level specifications and terms like “search”. I’m trying to get this thing down to basics: start with a population of non-self-replicators and end with a populations of self-replicators that contain a pattern sequence that promotes self-replicators. In other words, are capable of Darwinian evolution – reproduce with variance where the variance affects ability to replicate.
I don’t think you’ll be able to avoid search, as it’s part of the problem. If you’re not searching for a function, you’re defining one (as far as I can tell) and defining function is smuggling in specification. I’m happy to be corrected on this point!
When I first made my claim, I had in mind a GA, because GAs do produce information by many definitions, and the counter-argument often made that they do so by “smuggling” information in in the form of a fitness function is, I think, invalid.
I’m of the fitness-function-smuggles-information variety, because it narrows the search by way of input parameters. But that’s neither here nor there for your project as you already pointed out, so I’ll not open Pandora’s box on that one. I’ve seen it discussed here before on multiple occasions.
That’s the circularity I’d bet you’re attributing to UB’s definition of information.
Not quite. The circularity is in UB’s definitions (various) of information. For a definition to be operationalisable it cannot be circular.
I’ll restate that, defining the word “information” in this context, in a way which satisfies everyone, is unlikely because how one understands information in the context of specified complexity appears to relate to one’s world view. That’s my judgment anyhow.
So if information is defined in terms of “representations” or “symbols” then those words, in turn have to be definable in ways that do not use the word “information” or that do not use words that themselves are defined in terms of the word “information”.
However the representations are specified and complex, as are the things they represent, taken in regard to each other. The circularity becomes evident here, but it’s present in what we observe, before we ever try to define it. I hope I can make this more clear later in the post.
For example one dictionary definition of “symbol” is: A thing that represents or stands for something else, esp. a material object representing something abstract. So we turn to “represent” and we find: “to serve as the sign or symbol of”. So we turn to “sign” and we find: “an object used to convey information”.
Let’s temporarily put aside the definition and look at the object that we’re trying to explain the information content of, because I’d also try and suggest that information is present in the set of symbols as well as in the proteins they represent. They are both specified and complex, and we know this because of their interdependent interaction. Isolate either one, define the contents as information, and you should have no problem writing a simulation that can generate the sequences, as I had hyperbolized previously, “all day long,” especially if you get to define the terms of that which constitutes “function.”
However I think I got round that by specifying an inert intermediary in the transcription process.
It’s not the intermediary which needs to be inert, it’s the symbols; and they need to be inert because they can’t be directly physically connected to what they represent. A photograph of an object or a drawing of the same object, do not by necessity give rise to the object. They both describe the object, and could be used as a catalyst for some other effect, which would require a go-between (perhaps fitting in place of UB’s protocol requirement). I’ll provide an example if this doesn’t make sense to you. So I’m suggesting that your intermediary needs to be representative of RNA polymerase, and its specification must, of course, be embedded in the DNA. I’ll explain more later which will hopefully illustrate this.
Now, I have no problem with the belief that information requires an abstraction that in turn requires a mind. What I do have a problem with is any definition of information that insists on such an abstraction! Because that really is circular, and that was my main complaint in UBP’s definition – that he could not readily provide me with a criterion for “abstraction” that did not assume his own conclusion.
But Elizabeth, if it does require a mind, and you’re asking for a definition of information which doesn’t, we have a problem; and that’s what we’re trying to establish -- whether information requires a mind -- so your quest for this definition seems to undercut what you want to demonstrate. Continued... material.infantacy
But out of interest Mung: In a string of 100 ones, what is the frequency of ones? Elizabeth Liddle
When I last read it, there wasn't one. Mung, are you going to retract your accusation that I was "selective with the truth" about Todd Wood or support it? Because it's not very nice to go around making unsupported accusations of dishonesty, and then refusing to retract them when challenged to support them. Elizabeth Liddle
Elizabeth Liddle @77:
An example of meaningful message with 0 bits: http://toolstolife.com/images/content/flatline.jpg It’s the lack of bits that conveys the meaning.
Here's a link to Shannon's paper: A Mathematical Theory of Communication Please identify the section which deals with your example. Thanks Mung
Mung (and Upright BiPed): how did Nirenberg et al demonstrate that there was information in the genome if they didn't have any criteria for deciding whether they'd found it? And if they did, what were those criteria? Elizabeth Liddle
Allen_MacNeill @70:
I thought we had already agreed that both Shannon information and Kolmogorov information were essentially meaningless information.
Ah, no. That would not be correct.
While it is possible for a bit string to have meaning, such meaning is completely outside the measurements that result from the application of Shannon and Kolmogorov information theory.
Yes. But how does it follow, logically, that Shannon information is devoid of meaning (meaningless)? It appears as if you're saying the measurements have no meaning. From a prior discussion:
Elizabeth has asserted that Shannon information is a measure of reduction in uncertainty. In order for a reduction in uncertainty to be measurable there must be an expectation. That expectation must be changed by the measurement. That expectation must be an expectation about what one believes to be the case and the reduction in uncertainty changes what one believes to be the case. It follows that Shannon information is not information devoid of meaning.
Shannon information is itself information about something and is not meaningless.
I thought we had already agreed that Shannon information is essentially meaningless information.
On the contrary, I asserted that the concept of meaningless information is incoherent. I also demonstrated that Shannon information does in fact have meaning. Either or both of these should be sufficient to rebut the assertion that Shannon information is meaningless information. Mung
We could try to establish criteria to decide what qualifies as information, or we could follow the path already taken when it comes to demonstrating the presence of information. One that has been demonstrated to work in practice. wow, I sound more and more like Upright BiPed every day! Upright BiPed:
Nirenberg et al discovered the information in the genome by demonstrating it. They isolated the representations, deciphered the protocols, and documented the effects; the same way that all other recorded information has been discovered.
Mung
Mung:
So I guess it’s ok if we discard Shannon, but that’s hardly going to help if we just end up looking for a replacement way to quantify information.
Welcome home, Mung :) I don't think meaning can be readily quantifiable (well, there are ways, but not terribly good ways, IMO). But that needn't stop us establishing criteria by which to decide what does, and does not, qualify as "information" for the purposes of ID's claim (that it cannot be generated by Chance and Necessity only). Elizabeth Liddle
...the only point I think I’m making is that it’s [Shannon information] not going to give us (at least alone) a useful measure of the kind of information (meaningful information) we want to quantify! Is there any actual disagreement on this?
Ilion:
How does one quantify the intangible and immaterial?
So I guess it's ok if we discard Shannon, but that's hardly going to help if we just end up looking for a replacement way to quantify information. quantity: The measurable, countable, or comparable property or aspect of a thing. Likewise I don't see how you can quantify meaning. What sorts of things are not quantifiable, and why are they not quantifiable? I think you're going to face the same issues with meaning. It comes back to a mind, or intelligence. So basically now you will be saying that you hope to show that Chance + Necessity can generate meaning. How many molecules of H2O does it take to mean water? How many molecules of water does it take to mean a snowflake? Where does information come from? Where does meaning come from? meaning: the end, purpose, or significance of something ouch Mung
Dr Liddle: With Emily we just have to prepare. So long as winds are below 110 mph, I do not concern myself overmuch on that, providing you have a concrete building and reasonable windows and doors. And roof. But rains with a volcano and with heavy deposits waiting to be mobilised, that is a different story. GEM of TKI kairosfocus
Ilion: Before you set out to analyse signals in the presence of noise, you are already long since noticing that physical quantities are modulated to carry signals, and that these are then confused by natural occurrences that are due to various thermodynamic, quantum etc processes, not to mention circuit nonlinearities such as clipping, crossover and related effects, called noise. On the classic CRO, you can compare input and o/p signals and see the growth in grass and in distortion due to high and low frequency noise effects, dispersive media and what not. The good old eye diagram is a great tool for digital signals; the degree to which it is open or closed is an index of noise. So, from looking at real world signals you recognise noise and distinguish it from signals, long before you set about measuring it or quantifying information. So, since you know that intelligent signals are impressed on carriers of various types, such as in AM, FM, phase mod, pulse mod, delta mod, pulse code mod etc, you are in a position to then ask how much info is being carried. Indeed the issue Shannon had in mind was telephone lines and telegraph lines. This is where the info metrics suggested by Hartley (and with Nyquist involved) became significant. You knew, empirically, on observation, that you were dealing with signals carrying information; the issue was to quantify it. The controlled variation of physical parameters we term modulation impresses info into matter, waves and energy. You can actually show how varying the amplitude of a sine wave can make the signal ride on it, e.g. by varying in effect the amplitude of a power supply to an amplifier. This is actually a form of multiplication, of trig functions, and creates information-carrying sidebands around the carrier. Various games can be played with this, such as single sideband, vestigial sideband, double sideband suppressed carrier etc etc. The resulting spectrum can be examined too, in effect shifting from time to frequency domain, courtesy the Fourier transform. I have given the simplest case, AM, which is an analogue technique. Digital modulation is much more sophisticated, and these days digital signal processing techniques do amazing things. But the above should be enough to see that modulation allows us to impress information-bearing signals into matter and energy and especially waves. The devices, waves and so on do not know or care about what is happening physically, but the communication system is based on co-ordinated, planned variations of natural phenomena. For that matter, hearing is based on modulating air pressure waves, and detection in the ear is based on exciting hairs sensitive to particular frequencies, so that the hearing process is based, in effect, on a physical Fourier transformation from time to frequency. In turn the intensity of the oscillations triggers the pattern of nerve pulses processed onwards to be then perceived as intelligible sound, words or whatever. Vibrations in the air are just vibrations, subject to the same physical laws as when a tree falls in a forest with no one near to hear the crash. But, inject intelligently controlled vibrations and intelligently interpreted vibrations, and, magic! But equally, the sound info processing system could be a computer control system. No conscious involvement of a mind, but there is an intelligent coordination in a system. Point is, we see some very familiar patterns in the living cell. Very sophisticated stuff, in a high contingency thing that is way beyond the resources of the observed cosmos. 
Points to design, save for those too willfully resistant to see it. GEM of TKI kairosfocus
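As a concrete illustration of the amplitude-modulation point above (a minimal sketch only; the sample rate, frequencies and modulation index are arbitrary illustrative values):

import numpy as np

fs = 10_000                          # sample rate, Hz
t = np.arange(0, 1, 1/fs)            # one second of signal
fc, fm, m = 1000, 50, 0.5            # carrier, message frequency, modulation index

# Varying the carrier amplitude with the message is a multiplication of trig
# functions, which is what creates the sidebands either side of the carrier.
am = (1 + m*np.cos(2*np.pi*fm*t)) * np.cos(2*np.pi*fc*t)

spectrum = np.abs(np.fft.rfft(am)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1/fs)
print(freqs[spectrum > 0.01])        # [ 950. 1000. 1050.] : carrier plus the two sidebands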
How does one quantify the intangible and immaterial?
By adding together all the god particles? Mung
KF @ 94: “What the Hartley-Shannon metrics are about is how do we quantify the info that we already recognise as present. To do that, they are looking at essentially relative statistical frequencies in typical signals, and then using the relative frequency as a probability measure.” How does one quantify the intangible and immaterial? ‘Shannon Information’ is agnostic as to whether the signal is or is not intended to signify information. Had the concept of ‘Shannon Information’ been invented prior to the discovery of the Rosetta Stone, it would still have been possible to generate ‘Shannon Information’ about any hieroglyphic text. Ilion
kf: best of luck with Emily! I'm still not at all sure why this Shannon thing is even at issue. What point does anyone think I'm making? Because the only point I think I'm making is that it's not going to give us (at least alone) a useful measure of the kind of information (meaningful information) we want to quantify! Is there any actual disagreement on this? Elizabeth Liddle
Communication in the Presence of Noise Following Nyquist[1] and Hartley,[2] it is convenient to use a logarithmic measure of information. If a device has n possible positions it can, by definition, store log_b(n) units of information. The choice of the base b amounts to a choice of unit, since log_b(n) = log_b(c) * log_c(n). We will use the base 2 and call the resulting units binary digits or bits. 1 H. Nyquist, “Certain factors affecting telegraph speed,” Bell Syst. Tech. J., vol. 3, p. 324, Apr. 1924. 2 R. V. L. Hartley, “The transmission of information,” Bell Syst. Tech. J., vol. 3, p. 535–564, July 1928. Mung
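A quick worked illustration of the logarithmic measure in the passage Mung quotes (plain Python, using nothing beyond the quoted formula):

import math

for n in (2, 8, 256):
    # A device with n possible positions stores log2(n) bits.
    print(n, "positions ->", math.log2(n), "bits")
# 2 positions -> 1.0 bits, 8 positions -> 3.0 bits, 256 positions -> 8.0 bits (one byte)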
OK, guys, here's the link: http://theskepticalzone.com/wp/?p=1 All welcome, and it'll save derails and goose-chases here. Feel free to comment here (i.e. I'm not trying to make "home free") but at least we'll have a thread to ourselves. Cheers Lizzie Elizabeth Liddle
Dr Liddle: Back from packing up in prep for possible TS Emily. Re 91: "Shannon information is not a measure of meaning." Nope, to recognise that you have signal, not noise -- remember all that stuff about implicitly recognising that signals and noise are different above? -- is a recognition of meaningful function in there somewhere. What the Hartley-Shannon metrics are about is how do we quantify the info that we already recognise as present. To do that, they are looking at essentially relative statistical frequencies in typical signals, and then using the relative frequency as a probability measure. Then, to get additivity [a key intuitive property], do the log reciprocal probability thingie, yielding I = - log p. I think you are putting the cart before the horse, in a context where this is not your field. I cut my electronics eyeteeth on the good old D 52. Information is not to be confused with an arbitrary state of affairs. If you make up a string of 1,000 trays each with a coin and toss, the string is a state of affairs. It only becomes informational when you assign a meaning to H/T, then in effect record the pattern in the string. Now, what we have -- assuming fair coins -- is 50-50 odds for each tray. There are 2^1,000 possibilities (more than can be searched out by the whole observed cosmos running for its thermodynamic lifespan). By far and away most of the arrangements will be near 50-50 distributions in no particular order, and that is what a typical sample by actual tossing will be overwhelmingly dominated by. The ASCII code patterns for the first 72 characters in this post, or any other post in the thread, will be utterly unlikely -- and unrepresentative -- of the overwhelming pattern of the distribution. That is, we here defined a narrow and UNrepresentative zone of interest T that is strictly possible on chance but so overwhelmingly unlikely that this macrostate is going to be drowned out by the no-particular-pattern macrostate. If we come by and see the coins in the pattern of the first 72 characters of the post or another post in the thread, we have excellent reason to infer on this alone that the best explanation -- actual cause being sight unseen -- is intelligence. For, on the chance alternative we would only reasonably expect what is typical, not what is atypical. (Remember, the sample that the whole cosmos we observe running for its lifetime could produce would be less than 1 in 10^150 of the set of possibilities. Such a relatively small sample would only reasonably be expected to capture the typical, not the atypical, by chance; the atypical would be practically impossible by chance. And BTW, this is also pretty much the same basis for the second law of thermodynamics, statistical form.) I hope this sinks home this time around. Last time I did this, you tiptoed by quietly and went on to your usual points. GEM of TKI PS: Mung, I hope this helps you too. kairosfocus
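The arithmetic behind the 1,000-coin and 72-character illustrations can be checked directly; a minimal sketch (7 bits per ASCII character is assumed, and 10^150 is the search-resource bound cited in the comment above):

coin_configs = 2**1000       # distinct head/tail patterns for 1,000 coins
ascii_configs = 128**72      # distinct 72-character ASCII strings: 2^(7*72) = 2^504
cosmic_budget = 10**150      # cited bound on configurations the observed cosmos could sample

print(coin_configs > cosmic_budget)    # True: the space dwarfs the available search resources
print(ascii_configs > cosmic_budget)   # True: 504 bits is already past the 500-bit mark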
Shannon information does not tell us how much meaning there is in a message or indeed whether there is any meaning at all contained in a message. Does that answer your question? So Shannon information cannot tell us whether there is such a thing as information devoid of meaning. It does not even address that question. And since Shannon information itself has meaning, it does not and cannot provide an example of "meaningless information." Do you understand that Shannon information itself is meaningful, that Shannon information is not itself devoid of meaning, but that the meaning of Shannon information is disassociated from the meaning of the message? Mung
To all following the Upright BiPed vs Lizzie cage fight: Let's back up. First of all, we are discussing a fundamental claim made by ID proponents, namely that Chance and Necessity cannot generate information, where "information" might mean "complex specified information", "functional information" or some other defined meaning of "information". However, what the dictionary says the word means is irrelevant. Dictionary definitions are not prescriptive, they are a record of usage. I have no quarrel with any usage of the word "information" nor with any dictionary definition. Why should I? If the word is used in those ways then that is the way the word is used. What matters is how people who make that ID claim are using the word when they make that claim. I think the claim is false, for any regular English usage of the word information. However, I won't attempt to demonstrate that until we have at least one ID proponent who is willing to define the word in the context of that claim, in other words, provide a definition of that word for which they believe the claim is true. Now, clearly, nobody is making the claim for Shannon entropy, as that would be easily falsified. Dembski's concept of "specification" is all about narrowing down the set of Shannon-rich patterns to those for which he considers "Design" a reasonable inference, by insisting not merely on a large amount of Shannon information ("complexity") but also a large amount of compressibility ("specificity"). And I was anticipating that the definition I'd be getting was something like CSI. Dembski's claim is that Chance and Necessity cannot generate CSI (or could only do so with such remote probability that the possibility is not worth entertaining). However, Upright BiPed suggested something much more along the lines of Meyer's quoted definition from Merriam-Webster, in which "information" is not a property of a pattern, as with CSI, but the property of a process. This makes a lot more sense to me, as I've said, and would mean that the ID claim, which I set out to refute, is: Chance and Necessity cannot create information, where information is arrangements of things that have specific effects. So we have protocol in there now - information is not just a pattern but a pattern that has effects. And not just any effects - effects specific to a pattern. In other words there is a mapping between pattern and effect. However, Upright BiPed also made an additional caveat, which is that to be true information, the mapping has to be achieved via an inert arbitrary intermediary pattern of some kind (as is done by tRNA in a cell). And in addition, I made the caveat that the specific effects should probably be functional in some way - e.g. promote faithful self-replication. And so the ID claim becomes: Chance and Necessity cannot generate information, where information consists of arrangements of something that produce specific functional effects by means of inert intermediary patterns. And if ID proponents here are willing to stand by that claim, I am willing to attempt to falsify it. If not, please supply an ID claim that you are willing to stand by. I will post an edited (for more general consumption) version of this summary post on my blog, and you are all welcome to discuss it further there, rather than derail any more threads here. Cheers Lizzie Elizabeth Liddle
Are you going to answer my question Mung? Elizabeth Liddle
Elizabeth, let me try to put this as clearly as I can: Shannon information is not a meaningless measure. Do you agree? ;) Mung
Elizabeth Liddle:
If you define information as necessarily the product of intelligence, what on earth are ID proponents claiming?
First and foremost, I'm claiming that your objections against Upright BiPed are absurd. The facts are right there in front of you, but rather than face them you cry circular! Now you yourself have stated (please let me know if this is yet another of my "absurd misrepresentations of your positions") that there is in fact information in the genome according to any commonly accepted definition of the term information. Webster's:
the communication or reception of knowledge or intelligence
That's their very first definition of information. But you don't like that one. So when I say:
If a prerequisite for information is intelligence, then we’ll just have to come up with a different definition of information.
It's not exactly without basis in fact, now is it. Further, on Upright BiPed, you write:
Not quite. The circularity is in UB’s definitions (various) of information.
I don't think that's fair. Upright BiPed has not been giving you various definitions of information. What he has been doing is telling you how to recognize it. The things you need to look for. And therefore, if someone is to look at your simulation, these are the things which, if they see them in your demonstration, will convey to the satisfaction of everyone here that you have in fact generated information. This discussion for some reason seems very difficult for you, and yet to others of us it is very plain. If we are to say yes, you've generated information, these are the things we expect to see. So we've given you all you need. Now it is up to you to proceed. The basic concepts are all there and have not changed: representation, protocol, effect. If you feel you need to "operationalize" these further then please proceed. No one here is standing in your way. If in your attempts at further operationalization, when you delve deeper into these concepts, you find that they lead back to intelligence, so be it! The circularity is not on our part and cannot be blamed on UPB. He did not make up these concepts. IF there is a real gap here you ought to pause and consider whether your proposed simulation can in fact do what you claim. And if it cannot, it is not our fault. The fault is all yours for (over-)confidently asserting that you can do something which is not logically possible. I swear that this is one of the very first points that Upright BiPed raised with you: the logical impossibility of your task. You disagreed with him. Well, have at it! The ball is entirely in your court. Mung
Mung, let me try to put this as clearly as I can: Shannon information is not a measure of meaning. Do you agree? Elizabeth Liddle
Elizabeth Liddle:
But I absolutely agree with you – that “reduction in uncertainty” is crucially dependent on what you expect!
Expectation? Reduction in uncertainty? Those terms sound familiar. Is it in fact the case that Shannon information is not devoid of meaning after all? The problem here is that people confuse Shannon information, which is information about something, with what Shannon information is in fact about. Shannon information is not about the meaning of the message, but from this it does not at all follow that Shannon information is devoid of meaning. Shannon information, like all information, is about something and has meaning. So what is Shannon information about, and what is the meaning of Shannon information? When Allen and Lizzie can answer this, then perhaps a conversation about information can continue. Until then, confusion will reign. kairosfocus:
PS: 111, under circumstances that normally specify p in each digit’s case being 0.5, leads to a bit value of 3 { 3 * [ - log_2 (2^-1)]}, associated with 8 possible configs.
symbols. expected frequencies. possible messages. selected message. reduction in uncertainty. information about something. not meaningless information. On the right track. Can we get agreement? Mung
"Surely you had something else in mind when leveling your accusation that I was engaging in absurd misrepresentations of your positions. But what?" I'm thinking something like, "Your (accurate) representations of my positions tend to highlight their absurdity[, and that angers me]." Ilion
Elizabeth Liddle:
Mung, I really can’t tolerate your absurd misrepresentations of my positions any longer.
ok, let's take them one by one. Mung: This is the woman who asserted that she could generate 100 bits of “Shannon information” by simply tossing a coin 100 times. Lizzie: Indeed. Look it up. Indeed look it up in Dembski. Well, there's one "absurd misrepresentation" that isn't. Mung: This is the same woman who believes that meaningless information is a coherent concept. Lizzie: It’s perfectly coherent, just useless for our purposes. There's another "absurd misrepresentation" that isn't. Mung: Secondarily, Dr. Liddle reasons that since she can string together a sequence of symbols and transmit them as a message and measure the “information content” using Shannon information that there can in fact be such a thing as meaningless information. Lizzie: In Shannon terms, of course, yes. So that was the third "absurd misrepresentation" that isn't. Mung: Elsewhere Dr. Liddle has argued that a message could be sent and that a measure of the Shannon information could be made according to which the message contained 0 bits of information. Lizzie: Yup. It’s the basis of Shannon’s theory. And that's the fourth "absurd misrepresentation" that isn't. So you've accused me of absurd misrepresentations of your positions, yet agreed with each of them. How utterly bizarre. So what on earth are you talking about? Surely you had something else in mind when leveling your accusation that I was engaging in absurd misrepresentations of your positions. But what? Mung
Mung, are you serious? Or are you in fact trolling UD? I do sometimes wonder. But, on the assumption you are serious: If you define information as necessarily the product of intelligence, what on earth are ID proponents claiming? That the product of intelligence must be the product of intelligence? Are you really suggesting that ID is that vacuous? Elizabeth Liddle
I’m obviously not going to define information in terms of intelligence, as that would be circular.
I’m obviously not going to define information in terms of intelligence because that would expose how silly my proposed falsification of ID is. If a prerequisite for information is intelligence, then we'll just have to come up with a different definition of information. Mung
Not sure quite what you mean there, kf, but I certainly absolutely accept that the relevant signals here are "meaningful, functional" signals. As intelligence is what is at issue, I'm obviously not going to define information in terms of intelligence, as that would be circular. Dembski infers intelligent design from the pattern; he does not define the pattern as intelligent information and then infer intelligence, yes?
Dr Liddle: Do you accept the context for the whole discussion from Hartley & Shannon on? Namely knowing and distinguishing meaningful, functional, intelligent signals from meaningless noise? GEM of TKI kairosfocus
Mr MacNeill et al: The Shannon type metric provides a way to quantify information based on symbol frequencies in known meaningful messages that fit into a communication system [as he represented in his famous diagram]. As opposed to meaningless noise that interferes with the working of the system. Indeed, signal to noise ratio is a key quantity in the field of study, and one that Shannon uses in his channel capacity theorem; a major target of his research. In short the whole theory of information in communication systems is premised on an implicit assumption that we can reliably, and empirically, distinguish signals from noise; and indeed, assign power levels to both. That is normally done by looking at characteristics of both, e.g. noise often appears as flickering high-frequency "grass" or low frequency distortion on a CRO trace, and we can observe the effects of more and more of it. [I recall here, my first observations of this on the well known old D52 Telequipment CROs of 1960's vintage, then the astonishing performance of the classic Tektronix 465. Solid old machines, those.] Going to the next level, a spectrum analyser will often show signals standing out of such a grassy background too, like a mountain [often a mesa] or in some cases a spike. That is a background context that is implicit in the discussion in Shannon's paper, but which will not be readily evident to those who are not coming from the background of having to deal with real telecomms signals and noise on the ground. To my mind, this seems to be a source of much of the exchanges above. But, let us draw out a first point of reference:
KP 1: information theory rests on the implicit, empirically grounded premise that one may distinguish meaningful signal from meaningless noise, i.e. it rests on an inference to design, right from the get-go.
Get used to it. In that context, Shannon et al have analysed information metrics based on Hartley's suggested I = log (1/p) = - log p, where p comes from statistical frequency observations of typical message symbol occurrences. Printers long knew that the letter e makes up about 1/8 of typical English text. The old printer's devil is an illustration of noise, latterly present in the ever unwelcome typo in blog comments. Of course we can take a weighted average, H = - SUM over i of p_i log p_i, which per the math implies that a flat random distribution gives the peak value. That is a simple property of the metric, not its defining essence. Real world messages that do a job of work in a real system will not have that sort of distribution, though they may approach it. An artifact of a metric should not be confused with what the metric is and means; it is just a limitation that may in some cases be useful in analysis. (For instance, it plays a role in applying the theory to statistical thermodynamics. All of which was discussed at some length here at UD months ago, on the MG incident.) So, key point no 2:
KP 2: the metric is not the message, and . . . KP 3: functionally specific messages are OBSERVATIONALLY distinct from noise. KP 4: Functionally specific messages are similarly distinct from orderly patterns imposed by mechanical necessity, e.g. in a crystal.
Having cleared the deck of such misconceptions, we can now look at something serious. Measuring info on I = - log p, and recognising specificity in various ways, such as vulnerability of already observed function to noise injection [and other related ways . . . ], where S is a dummy variable for such specificity, we may evaluate a log reduced form of the Dembski type Chi metric: Chi_500 = I*S - 500, bits beyond the solar system threshold. Something like this post in this thread will have a high I value (as can be easily approximated from 7 bits per ASCII character, much as Shannon did with his paper), and as a contextually responsive text in English [notice the basis in observation by an intelligent and knowledgeable observer, implicit in all scientific work . . . ] -- and thus highly vulnerable to noise -- S is 1 too. The case would go positive easily, and the inference to design is also confirmed by direct observation. By contrast, an equal length string of flat random characters would take the same I value, but would NOT be specific, so S = 0. The metric would be - 500, automatically. Such a string could of course in principle be generated through a white noise to text program, fed by say zener noise filtered to get flat randomness. The metric expression would tell us, and by overwhelming probability the flat randomness and absence of English text features etc will reliably tell us what is going on -- that the result is credibly chance and/or necessity in action. Another sign of the reliability of the metric. (NB: a human imitating such a pattern will typically NOT give a flat random result, though the gibberish will be readily apparent. Long flat random distribution text strings are hard to get.) The by now familiar Wiki article on Infinite Monkeys summarises what chance can do, and this shows the significance of the threshold:
One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona". Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".[21] A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters: RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d...
A chance random walk rewarded by trial and error can search a space of about 10^50 possibilities [as Borel long ago identified as a sort of threshold for the lab scale], with some difficulty, but one of 10^150 is an entirely different matter. The problem is that 73 ASCII characters is a very small scope indeed in which to write serious algorithms for information controlled processes. Indeed, once we look at known self-replicating forms, we see that we are dealing not with 500 bits but more like 100,000 or more. By an obvious and well substantiated observation, DNA is functionally specific, digital information, and it is well beyond the relevant threshold where it is reasonable to infer to design, not chance and necessity, as material causal factor. That is the problem that is being ducked behind all the waves of objections that we keep on seeing, for months on end now at UD. If the objectors are serious, they should be showing us, on empirical evidence, how the control systems in question to make a metabolising, self-replicating molecular nanotech automaton start out at, at most, say 50 - 100 ASCII characters' equivalent, and that there is an empirically well warranted path from that to the sort of systems we observe in living cells. Then, that there is a similar branching tree pattern path with empirical support, leading to major body plans. Such has of course long since been conspicuously missing in action. That is why we see so much fussing over definitions and artifacts of metrics that turn molehills into imagined mountains. It is time to either come up with something that makes empirically grounded sense or conclude that modern thought on evolution made a major blunder when it brushed aside the co-founder, Wallace's framework of Intelligent Evolution. The evolutionary materialist tail is plainly wagging the origins science puppy. GEM of TKI kairosfocus
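A minimal Python sketch of that reduced metric, purely for illustration (the 7-bits-per-ASCII-character figure and the 0/1 specificity judgment S are taken from the comment above; the example strings are hypothetical stand-ins):

def chi_500(text, specific, bits_per_char=7):
    # Log-reduced Dembski-style metric as stated above: Chi_500 = I*S - 500.
    # I is approximated as bits_per_char * len(text); S is a 0/1 judgment of
    # functional specificity supplied by the observer.
    I = bits_per_char * len(text)
    S = 1 if specific else 0
    return I * S - 500

# Hypothetical examples: a 2000-character contextually responsive post versus
# an equal-length string of flat-random characters (same capacity I, different S).
post_length = 2000
english_post = "a" * post_length
random_gibberish = "q" * post_length

print(chi_500(english_post, specific=True))       # 13500 -> positive, design inferred per the metric
print(chi_500(random_gibberish, specific=False))  # -500  -> not positive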
kf:
Nope, the example actually shows just how contextual and mental information is. A flatline takes its meaning from the implied contrast with what happens with a living case. That is, this is a case that has GONE flatline, and indeed the image you show gives the transition.
Right. But as a nitpick: if your EKG were on a loop, and you walked past the patient's bedside and saw the flatline, with no other signal, then you'd still know - from the very predictability of the signal - that the patient had had a cardiac arrest. But I absolutely agree with you - that "reduction in uncertainty" is crucially dependent on what you expect! Which is why I think it is useless in this case, and why CSI needs to incorporate specificity as well as complexity to be an indicator of "something interesting going on". Elizabeth Liddle
kf re Shannon: right. But IMO Shannon information is irrelevant here, because UBP is not using CSI as his information metric. If he was, it would be relevant. And perhaps we can get back to our own conversation about CSI some time :) Cheers Lizzie. Elizabeth Liddle
MI (That's easier to type!):
Elizabeth Liddle, pardon my intrusion. Here’s some food for thought. How does aboutness evolve? Any simulation which purports to show that specified complexity can be generated via chance and necessity would have to solve one of the same core issues with OOL, and that is how the specification for a series of independently specified functional proteins becomes embedded into a medium (DNA) which appears to have no physical dependence on the proteins it represents. How does one generate both the functions, and the independent specifications for those related functions; because both are necessary for demonstrating the presence of specified complexity, I would say.
OK, and that seems to be something that Upright BiPed is also insisting on, so I have incorporated it into the hypothesis. I think there's a bit of a terminology problem though, as I did say in some post somewhere, regarding the "no physical dependence". I'd say that the translation process (getting from RNA to amino acid) is entirely dependent on a physical process - the locking of a tRNA molecule to the only codon it can bind, and the locking of the only amino acid that can lock on to the other end of that molecule. I see nothing "non-physical" about this, nor any "break in the causal chain" as UBP also put it. What is much more interesting is that the set of tRNA molecules available in the cell is only a small subset of the total number of possible tRNA molecules (20*64 of them, i.e. 1280). If all of them were present in the cell, coding would be impossible, because for every codon, there would be twenty tRNA molecules each with a different amino-acid lock at the other end, and which protein got made would be completely independent of the codon sequence. However, what in fact we have in the cell is a subset of these - 61 of them in fact, and what is crucially important about that subset is that the codon ends are all unique. Not a single tRNA molecule shares a codon end with any other, even though several share the same amino acid end. Now, there are a large number of subsets of the total possible set (20*64) that would do the job (ensure that every codon specified one amino acid), but what all those subsets have in common is that they are subsets in which each codon is only represented once, and every amino acid is represented at least once. So to summarise: the size of the total set of possible tRNA molecules is 1280. The size of the subset that will ensure that each codon codes for only one amino acid is 61 (not 64 because 3 are used only for stop codons, although presumably this could be different too). And the size of the set of sets that could do the job just as well is, by my estimate, vast (haven't yet figured out just how vast!), although the size of the set of sets that couldn't is orders of magnitude vaster. So the real informational magic, as I see it, is not the translation process itself, which is perfectly physical, but in the DNA sequences that give rise to one of the sets of tRNA molecules that could do the job. And, interestingly, coding a tRNA molecule from a DNA sequence is directly physical.
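For what it's worth, a minimal Python sketch of one way to put a number on "how vast", under the criteria stated just above (treat a workable code as any assignment of one of the 20 amino acids to each of the 61 sense codons, with every amino acid used at least once; a back-of-envelope count, not a biological claim - both totals print out in the neighbourhood of 10^79):

from math import comb

CODONS = 61        # sense codons (3 of the 64 reserved as stops, as above)
AMINO_ACIDS = 20

# All ways to assign one amino acid to each sense codon
all_assignments = AMINO_ACIDS ** CODONS

# Assignments in which every amino acid is used at least once
# (inclusion-exclusion count of surjections from codons onto amino acids)
covering = sum((-1) ** j * comb(AMINO_ACIDS, j) * (AMINO_ACIDS - j) ** CODONS
               for j in range(AMINO_ACIDS + 1))

print(f"all assignments:      {all_assignments:.2e}")
print(f"covering assignments: {covering:.2e}")
print(f"fraction covering:    {covering / all_assignments:.2f}")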
Beginning with a self-replicating molecule, there would need to exist a relationship between the elements of that molecule and the inert medium (DNA); while at the same time there would need to exist a corollary relationship between the elements of the molecule and the functional proteins which are specified within the inert medium.
Not sure what you are saying - DNA is a molecule! Could you explain?
Take away that pesky independent specification and I’ll write programs all day which simulate the spontaneous generation of “functional complexity,” because I’d get to determine the conditions under which a sequence would be considered functional; and that would be smuggling the specification into the simulation, which would save me from writing about a billion lines of code.
I guess, at least if I understand what you are saying. That's why I'm proposing to do it the hard way :) I'm not even going to start off with self-replicators. They will have to emerge from non self-replicators (although obviously I will set up my physics'n'chemistry in such a way that I think self-replicators are a likely consequence). And my criterion for functionality is simply anything that promotes faithful reproduction.
One can’t merely begin generating permutations of a sequence space and attribute function to those permutations, because there would need to be a definition for the function, that is, a specification. We’d need to know why those permutations correspond to a functional arrangement, when they’re supposed to be arbitrary; at least, one would need to specify which ones have function if necessity couldn’t be explicated and the functional sequences mapped out. Meaning, or aboutness, would need to be assigned to those permutations, and that would be begging the question by assuming a specification in order to demonstrate one.
Right. I think. Which is why I want to bypass as much of that kind of specification as possible and simply attempt to produce a starting environment, without self-replicators, in which self-replicators emerge, and which do not simply replicate their non-functional pattern (which would be very easy, and by some definitions would satisfy criteria for information transfer), but in which their pattern itself promotes successful replication (and critters with some patterns will therefore replicate more often than critters with others). However, this alone would not satisfy Upright BiPed's criterion, because he wants that pattern to promote its effects by means of an inert intermediary. That's a harder bar for me to climb, but I'm willing to give it a go.
This is an important point. For any given sequence space I think we can agree that a small number of the permutations would have function. So if we define that function as part of the simulation, we fail because we’ve provided a specification in order to demonstrate one. If we choose instead to represent reality, we need to find existing functions that are part of the valid protein sequence space, and in this case we’re left with an impossible search.
Yes, and as I said, that's why I'm trying to avoid high-level specifications and terms like "search". I'm trying to get this thing down to basics: start with a population of non-self-replicators and end with a population of self-replicators that contain a pattern sequence that promotes self-replication. In other words, a population capable of Darwinian evolution - reproducing with variance, where the variance affects ability to replicate.
If I’m overstating or misinterpreting the problem, let me know.
Well I think I know what you are saying. When I first made my claim, I had in mind a GA, because GAs do produce information by many definitions, and the counter-argument often made that they do so by "smuggling" information in in the form of a fitness function is, I think, invalid. The fitness function in a GA is the analog of the environment in nature - it presents the problem-to-be-solved. The evolving population does the solving, and those solutions are highly informative! Indeed they are put to practical use. But I'm not doing that here, and I'm not even starting, as a GA does, with a minimally functional population of self-replicators. My self-replicators are going to have to assemble themselves from the starting conditions, and somehow find a way of arranging their sequences (if you will excuse the teleological language) so that their probability of successful self-replication is maximised.
So if the target of a simulation is a functional system that demonstrates specified complexity, it would need to have inherent specification and function, and they’d need to be independent, and they’d need to be a result of the same blind process, and we couldn’t ever assume or define the meaning of one in the context of the other, or we’d be begging the question.
Indeed.
That’s the circularity I’d bet you’re attributing to UB’s definition of information.
Not quite. The circularity is in UB's definitions (various) of information. For a definition to be operationalisable it cannot be circular. So if information is defined in terms of "representations" or "symbols" then those words, in turn, have to be definable in ways that do not use the word "information" or that do not use words that themselves are defined in terms of the word "information". For example, one dictionary definition of "symbol" is: A thing that represents or stands for something else, esp. a material object representing something abstract. So we turn to "represent" and we find: "to serve as the sign or symbol of". So we turn to "sign" and we find: "an object used to convey information". We also, I think briefly, had a circularity problem IMO with UBP's inclusion criterion of "a break in the causal chain". As my claim is that there is no break in the causal chain between Chance-and-Necessity and information, that would have been assuming my non-conclusion! Or rather, assuming UBP's. Ditto with "break in the physical chain". However, I think I got round that by specifying an inert intermediary in the translation process. But UBP doesn't seem to want to comment on that. grrrrrrr.
The circularity isn’t present in the definition, but rather in the system itself: it’s paradoxically interdependent, from a material standpoint, so of course any definition of information which accounted for what we observe in living systems would appear circular to you. The specification needs to be independent of the concrete product, but in any system which is the result of a blind process, it couldn’t have any independence — everything would be the result of the same process.
I think this is important. I don't think there is any circularity in what I propose.
The problem you seem to be having is with the fact that there can’t be anything abstract about a physical system. When you observe the inner workings of a cell, you see merely a system acting out physical laws — and it quite possibly is, there being no theoretical problem with describing its operation via physical processes — but it’s not the operation of the thing that needs to be explained, it’s the origin of it
Exactly. That's why I want to start without even a self-replicator.
(which I believe requires an abstraction; and I believe that abstraction requires a mind). The problem that needs to be solved is how to arrive at specified parts which comprise a functionally integrated whole, which is what we observe in living systems.
Now, I have no problem with the belief that information requires an abstraction that in turn requires a mind. What I do have a problem with is any definition of information that insists on such an abstraction! Because that really is circular, and that was my main complaint about UBP's definition - that he could not readily provide me with a criterion for "abstraction" that did not assume his own conclusion. However, as he seems to regard the way that a very specific subset of tRNA molecules maps each codon to one, and one only, amino acid as an example of an abstraction/non-physical-link/break in the causal chain, I have used a general description of that kind of process as one of the criteria my demonstration has to meet.
What you really need to be explaining is how aboutness comes about via blind processes. That’s a tall order, and I believe that of anything your simulation might usefully do, it wouldn’t solve the CSI “problem” unless it could explain the origin of aboutness, or demonstrate that aboutness is epiphenomenal to this category of sufficiently complex material arrangements.
And if, by "aboutness" you mean, for example, a sequence that results in a functional effect, namely, more efficient self-replication, then I agree, and that is what I propose to try and do.
Demonstrate that aboutness is a property of matter, and you’ve explained the most troubling aspects of mind, OOL, and CSI. This is a philosophical problem I believe, not an algorithmic one.
hmmm. If it's just philosophical, then we aren't doing science :) And ID is science, right? So let's keep the "aboutness" here very specific: a message is "about" something if it, to use Merriam-Webster, "produces specific effects". So a flatline EKG, while it may contain no Shannon bits, produces very specific effects, namely the summoning of the cardiac arrest team. A DNA molecule is "about" something because the arrangement of its nucleotides "produces specific effects", namely the assembly of proteins from specific sequences of amino acids. And, in my proposed demonstrations, my virtual critters will contain strings that have specific effects - the arrangement of parts within those strings will at least partially determine how faithfully the critter self-replicates.
I’ll say with a degree of confidence that this is an impossible mission, but I’m not against trying, as long as it’s an attempt to deal with the real problem, and not just another hackneyed variation of equating mere complexity with specified complexity.
Which is why I won't even start until I have my project signed off by at least one ID proponent!
Specified is the key word here; so explain the origin of specification — of aboutness — or show that it doesn’t exist, and you’ve won. Set yourself to this task, and never mind trying to squeeze out of ID proponents a definition of information that’s loose enough for you to simulate what you’ve quite possibly already determined you can do. (I think there’s a good chance you already know how you expect to solve the problem and you’re looking for a definition of information that will allow you to do it, claiming victory. I hope I’m wrong and that your sincere and congenial demeanor is genuine. I’m certain you wouldn’t be long satisfied with a victory that rests upon the technicality of defining that which is challenging to define, like “life” or “consciousness”).
You are wrong :) No, I'm not looking for a definition of information that will supply me with a Get Out Of Jail Free card :) I realise that I am almost certainly suspected of doing so in some quarters, which is why I have been (from my PoV) battling with the onslaught of an army of strawmen - accusations that I am trying to use a definition that excludes "meaning" (I'm not, and I've made that explicit a gazillion times); that I am turning down perfectly good operational definitions that have been offered and agreed to (I'm not - none of what has been offered and/or agreed has been an operational definition and what I have offered as an operational definition has not been agreed); that I am avoiding the threads where this is being discussed (nope; I confess great difficulty in trying to conduct this kind of conversation on this kind of site, especially when threads seem to close for comments at semi-random! and the conversations themselves have to find a new OP to derail in order to continue....) Hence my new blog, for which I am just about to compose my first post :) Free (virtual) beer for all on opening day:
I’ll say with all sincerity that if this is a problem which can be explicated via some clever theorizing and even more clever programming along with ingenious algorithmic development, I hope you solve it. That would be something indeed, and worthy of historical immortality. But don’t underestimate the issue by attempting to equate specified complexity with garden variety complexity, or you would be setting out to demonstrate that which is trivially true.
I certainly do not intend to. But Upright BiPed, with whom I have been conducting most of this conversation, specifically (heh) excluded CSI and Dembski's definition from the conversation, saying (IIRC) that it was irrelevant. That's fine. In fact, I think that CSI has an important relationship with UBP's concept, but that UBP's concept is better. Dembski's concept is that design can be inferred if CSI is detected in a pattern. UBP, rightly IMO, says no - the information transfer process is intrinsic to the concept. I'm not sure what UBP would make of Paley's watch :) No matter. And thanks for your help. A fresh mind on this was sorely needed. Cheers Lizzie PS - re aboutness and mind - I do think that is another issue, not entirely unrelated, but related by very many removes. I think we can address this one first :) Elizabeth Liddle
Mung: Please turn down the rhetorical voltage. GEM of TKI PS: 111, under circumstances that normally specify p in each digit's case being 0.5, leads to a bit value of 3 { 3 * [ - log_2 (2^-1)]}, associated with 8 possible configs. kairosfocus
Dr Liddle: Pardon, but you are revisiting old news. We all know that an artifact of the Shannon-Hartley type log metric is that a flat random distribution gives the highest weighted average sum for bits per symbol. However, we also know, per empirical study, that real messages DO NOT HAVE THE FLAT RANDOM CHARACTERISTIC; not least because there seems to be an inbuilt redundancy in things that do real things effectively. And so, we would distinguish signal from noise on observable characteristics. Isn't it some weeks ago now that I actually posted a chart from Trevors and Abel of the observable contrast between RSC, OSC and FSC? Functionally specific sequences will be aperiodic [contrasted with the endless repetitions of say crystallographic order], and will also not be flat-random. But, most of all, they will have a specific, observable function that noise cannot imitate. Beyond a reasonable threshold of complexity, it is utterly, maximally implausible that such functionally specific sequences could be arrived at by random walk searches in a config space, rewarded by trial and error based success detection. This is because islands of function -- per massive observation -- reliably are deeply isolated in config spaces, precisely because of the special requirements of functional specificity. Recasting in sampling terms, we have UN-representative small isolated zones in the space of possibilities. So, relatively small, at-random samples are not going to be a reliable means of capturing such islands, especially when for instance on the gamut of our solar system -- our effective universe for serious chemical etc interaction -- the number of Planck time quantum states [~10^30 needed for the fastest chem rxns] for the about 10^57 atoms is about 1 in 10^48 of the number of states specified by 500 bits of info storage capacity. That is more than adequate to capture typical behaviours or patterns, but it is vastly inadequate to catch such special zones T with any confidence. Indeed, this is essentially the same analysis that statistically grounds the second law of thermodynamics, and gives teeth to Dr Sewell's observation that merely opening up a system to raw inflows of energy and/or matter does not make highly improbable outcomes suddenly much less improbable. Indeed, I have pointed out in App 1 of my always linked how such injections strongly tend to increase the space of possibilities, making the system MORE chaotic! So, the unmet challenge to address the significance of the following expression as already identified -- pardon, but I notice on your part an apparent Wilson, Arte of Rhetorique-style tendency to tip-toe by things you seem to be unable to address on the merits solidly while holding your present views -- comes to the fore again (and as something that can indeed be operationalised, despite your repetitions of the talking point that there is no operational definition . . . ): Chi_500 = I*S - 500, bits beyond the solar system threshold Specificity can be recognised in many ways [start with what is the impact of deliberately injected noise], and we are open to new ones, just show a good rationale. Info carrying capacity I can be measured on the I = - log p type metric, or by the weighted average form of that per Shannon, or even by direct inspection [as in say a USB mem-stick]. 500 is a threshold based on solar system scale resources, and Chi_500 is just a handy label.
We assert that where this value is positive, we may reliably infer, in cases that we can directly observe, to design as material causal factor. We further assert that this is a well tested inference, with say libraries and the Internet and the software industry in support. We can even assert that there are no known, credible counter-instances, and that routinely claimed "exceptions" like canals on Mars or GAs etc reliably turn out to be examples. So, we may make an inductively strong generalisation that where Chi_500 is positive, we have good reason to infer to design. The controversies do not arise from the above, at least to those who are not playing at selective hyperskepticism. No, they stem lately from consequences, for DNA is a 4-state digital system of high specificity and complexity, which functions in life in well known ways. Chi_500 is easily positive, and this points to cell based life as designed. But in an era where institutional science is dominated by materialists, that is a very unwelcome inference, so it is stoutly resisted. The question-begging a priori materialism used to spearhead that resistance, and the want of positive evidence for the claimed chance and necessity driven origin of life and of body plan level biodiversity, multiplied by the invidious assertions of political motivation and associated expulsion tactics, say the merits are on one side and the power is on the other. So, it is clear from what has been going on across the past 10 years, that the materialist reigning orthodoxy is cornered, mortally wounded and dying, but is lashing out viciously along the way. GEM of TKI kairosfocus
Dr Liddle: Nope, the example actually shows just how contextual and mental information is. A flatline takes its meaning from the implied contrast with what happens with a living case. That is, this is a case that has GONE flatline, and indeed the image you show gives the transition. This again brings out the subtleties involved in informational systems. In particular, the "aboutness" of information, and the need for context in understanding it, a context that involves background knowledge, and protocols for structuring and transferring meaning and associated function. A heart beat pattern or an EEG pattern in time are both known to be associated with phenomena of life. Transition to flatline -- note that dynamic aspect -- is associated with the specific phenomenon known as death. Say, a mannequin put under the same electrodes would not have the patterns to begin with, i.e. there was no life there. And if we were to simulate the signals then switch them off for such a mannequin, that would itself point to the issue of context. GEM of TKI kairosfocus
An example of a meaningful message with 0 bits: http://toolstolife.com/images/content/flatline.jpg It's the lack of bits that conveys the meaning. Elizabeth Liddle
Mung:
Dr. Liddle reasons that because it is true that the meaning of a message is not relevant to the measure being used (Shannon information) it must be the case that the measure being used can tell us that information can in fact be devoid of meaning.
Clearly a measure of information that does not measure meaning cannot be used to show that the information is devoid of meaning. And, of course, I did not say so. In fact, I made it clear (gah, I can't believe I'm still saying this) that because Shannon information is not a measure of meaning it is useless for our purposes. I have no idea why you keep banging on about Shannon information. If you don't like the fact that Shannon information theory and Kolmogorov's information theory do not treat the meaning of information, then your issue is with them, not me. As you well know, of course.
Secondarily, Dr. Liddle reasons that since she can string together a sequence of symbols and transmit them as a message and measure the “information content” using Shannon information that there can in fact be such a thing as meaningless information.
In Shannon terms, of course, yes. In the terms we actually are interested in, of course, no. As I've said about a gazillion times, that's why Shannon information won't work for our purposes. As Meyer also says. As Dembski also says. As Mung agrees. As Lizzie agrees. Why are we talking about this again? Oh, yes, because Mung likes trolling Lizzie. Silly Mung.
While promptly forgetting that Shannon information is agnostic about any meaning in the message!
lol @ "forgetting"
Elsewhere Dr. Liddle has argued that a message could be sent and that a measure of the Shannon information could be made according to which the message contained 0 bits of information.
Yup. It's the basis of Shannon's theory. From good ole wiki:
Suppose one transmits 1000 bits (0s and 1s). If these bits are known ahead of transmission (to be a certain value with absolute probability), logic dictates that no information has been transmitted. If, however, each is equally and independently likely to be 0 or 1, 1000 bits (in the information theoretic sense) have been transmitted.
http://en.wikipedia.org/wiki/Information_theory#Entropy Elizabeth Liddle
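A minimal Python sketch of the arithmetic in that quotation (per-outcome information is -log2 p, so a symbol known in advance carries 0 bits):

import math

def surprisal(p):
    # Shannon information of an outcome with probability p, in bits
    return 0.0 if p == 1.0 else -math.log2(p)

print(surprisal(1.0))          # a bit known ahead of transmission -> 0.0 bits
print(surprisal(0.5))          # one fair 50/50 bit -> 1.0 bit
print(1000 * surprisal(0.5))   # 1000 independent fair bits -> 1000.0 bits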
Mung:
Your claim was challenged from the moment it was first uttered. You haven’t even begun to give us a reason to think it’s true.
I will rephrase: my claim remains unrefuted. And it will remain so until someone can come up with an operationalisable definition of information. Seeing as the entire ID project depends on it, that should concentrate the mind wonderfully. But it isn't very difficult, as Meyer has already given one. All that remains is for someone to have the courage to sign off on my operationalisation of it.
This is the woman who asserted that she could generate 100 bits of “Shannon information” by simply tossing a coin 100 times.
Indeed. Look it up. Indeed look it up in Dembski.
This is the same woman who believes that meaningless information is a coherent concept.
It's perfectly coherent, just useless for our purposes.
Now Dr. Liddle has made it clear that she accepts the presence of information in the genome.
Of course. By any definition of information, there is information in the genome.
But what she has so far refused to say is why she believes that information exists in the genome (beyond a nod to “I think it meets any definition anyone can think of for information”).
Shannon Entropy definition: because a DNA strand (or an RNA strand, or a string of amino acids) has a high bit count. Merriam-Webster definition as cited by Meyer: because DNA is an arrangement of something (nucleotides) that produces specific effects (proteins, inter alia). Definition as "a collection of data": DNA acts as a database within the cell, from which the cell can select items as needed. Definition as "knowledge communicated or received concerning a particular fact or circumstance": DNA within a population encodes information about the environment in which it was selected. For example, the broken GULO gene in apes tells us that it was broken in a population with easy access to a vitamin-C rich diet. Information as communication between a sender and receiver enabling the receiver to duplicate the sender's behaviour: Genetic material from one cell duplicated in, or inserted into, a second cell enables the second cell to duplicate the behaviour of the first. Information as code: Nirenberg discovered that each possible triplet of nucleotides, bar the three stop codons, specifies an amino acid, in a many-to-one mapping. Moreover, we now know that "non-coding DNA" also specifies particular actions/events in the cell, including whether or not a particular sequence is translated. Information as evidence: DNA traces can be used to identify a criminal in a court of law.
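On the first of those readings (bit count), a minimal sketch of the usual capacity arithmetic, assuming four equiprobable bases:

import math

BITS_PER_NUCLEOTIDE = math.log2(4)   # four possible bases -> 2 bits of capacity each

for n in (100, 1_000, 100_000):
    print(n, "nucleotides ->", n * BITS_PER_NUCLEOTIDE, "bits of Shannon capacity")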
So why should anyone believe that information is present in the genome? If information exists in the genome, how would we go about exposing it, or demonstrating it’s presence?
For all the above reasons, which are, in themselves, demonstrations that it does, by many definitions.
Closed minds want to say lego pieces contain information and can transmit information by being plugged together.
I guess a single lego piece could contain information, in some sense or other, but lego pieces can certainly transmit information if plugged together. Anything that can form a sequence or pattern can be used to transmit information. Elizabeth Liddle
Mung, I really can't tolerate your absurd misrepresentations of my positions any longer. Allen is of course correct, and nothing he has said is in disagreement with anything I have said. Will you please stop mangling my statements and then accusing me of lying. The only problem with my statements is your willful refusal to understand them. I will deal with a few of your more egregious remarks shortly. After a cup of coffee. Elizabeth Liddle
Allen_MacNeill:
BTW, it seems to me that I may have given Ms. Liddle some support, in that her assertion that a meaningful message can have no Shannon information at all is clearly illustrated by Holmes remark to Watson in “Silver Blaze”.
Her claim was not that a meaningful message can have no Shannon information. Elizabeth Liddle:
An operational definition can also give criteria for establishing whether some categorical variable is present or absent. As I said. One that allows you to measure the variable will of course also allow you to say whether it is present or absent. Shannon’s definition does that – if there are 0 bits in the message, there is no information present, and if there are >0 bits in the message, there is information present.
Mung
As a first approximation of what constitutes meaningful information, it seems to me that temporal order is part of it. Hence the difference in meaning between xztojhayfudmolngwpeeqoeudhitrobrcveks and thequickbrownfoxjumpedoverthelazydogs Same bits, different order, entirely different meaning. Same thing with DNA and every other kind of encoded information that I can think of right now. Which brings me to the fact that it's 'way past my bedtime and we've got to get up early tomorrow to continue to pack up for our move to our new house. So, good night all and flights of angels sing you to your rest ("thee" and "thy" are, of course, second person singular...so much singing) Allen_MacNeill
And yes, I realize that I typed "quantitative" in comment #51 when I meant "qualitative". Pun intended, of course... Allen_MacNeill
I thought we had already agreed that both Shannon information and Kolmogorov information were essentially meaningless information. While it is possible for a bit string to have meaning, such meaning is completely outside the measurements that result from the application of Shannon and Kolmogorov information theory. Ergo, I don't need to work out a theory of meaningless information; Shannon and Kolmogorov have already done so. It seems to me that, while there might be a theory of meaningful information, it will almost certainly not be mathematical (i.e. quantitative). Rather, it seems to me that a theory of meaningful information will be entirely qualitative (shades of Robert Pirsig!) Allen_MacNeill
Allen_MacNeill:
And yes, I’m working on a coherent, quantitative, or testable theory of meaningful information. Wish me luck!
Is there any other kind of information? Let me suggest that you first work on a coherent, quantitative, testable theory of meaningless information. Surely that will help you in your quest. If not, why not? Mung
interlude 2
Introduction There was a glorious moment, in the 1960's, when, like children first learning to read, we began to perceive meaning in strings of nucleotides in DNA. Suddenly, we could understand that TTT in a protein-coding sequence of DNA meant that the amino acid phenylalanine would be incorporated into a protein. We could, in our minds, and later with computers directly translate strings of bases, [1] taken as triplets, [2] into strings of amino acids, which then fold up to form three-dimensional proteins. While the challenge remained of perceiving the three-dimensional structure specified by a linear protein sequence, we understood that information was encoded in DNA in a way that was both explicit and linear. – An Overview of the Implicit Genome. Lynn Helena Caporale in The Implicit Genome.
[emphasis mine] Who here would teach their child to read from a book of randomly generated symbols? The very first footnote in the book: [1] Nirenberg, M.W. and Matthaei, J.H. The dependence of cell-free protein synthesis in E. coli upon naturally occurring or synthetic polyribonucleotides. Proc. Natl. Acad. Sci. U.S.A. 47(10): 1588-602 (1961). Upright BiPed vindicated! Mung
lol. I typed in "0 bits of Shannon information" into Google and got a whopping two hits. TWO BITS! Mung
a brief interlude
Most analyses assume that genomes are to be read as linear text, much as a sequence of nucleotides can be translated into a sequence of amino acids by looking in a table. However, information can evolve in genomes with distinct forms of representation, such as the structure of DNA or RNA and the relationship between nucleotide sequences. Such information has importance to biology yet is largely unexpected and unexplored. As described in this volume, much of this information ... ...The examples reviewed in this volume, taken from a broad range of biological organisms, both extend our view of the nature of information encoded within genomes and deepen our appreciation of the power of natural selection, through which this information, in its various forms, has emerged. – An Overview of the Implicit Genome. Lynn Helena Caporale in The Implicit Genome.
[emphasis mine] Now Dr. Liddle has made it clear that she accepts the presence of information in the genome. But what she has so far refused to say is why she believes that information exists in the genome (beyond a nod to "I think it meets any definition anyone can think of for information"). So why should anyone believe that information is present in the genome? If information exists in the genome, how would we go about exposing it, or demonstrating its presence? Inquiring minds want to know. Closed minds want to say lego pieces contain information and can transmit information by being plugged together. Mung
BTW, it seems to me that I may have given Ms. Liddle some support, in that her assertion that a meaningful message can have no Shannon information at all is clearly illustrated by Holmes's remark to Watson in "Silver Blaze".
The dog sent the message "I did not bark" to Holmes but not to Watson? Mung
Allen_MacNeill:
Consider, for example, the following string of letters: xztojhayfudmolngwpeeqoeudhitrobrcveks There are 26^37 different random strings of this length (37 letters, with 26 different choices per letter). Each such string has the same amount of Shannon information.
Can't we start with a simpler example, lol? I have a coin. I flip the coin. If the coin lands on heads I send you a 1. If the coin lands on tails I send you a 0. What is the Shannon information content of the message "111"? Why? Can we agree that it's 3 bits? 000 001 010 011 100 101 110 111 Two symbols, equal probability, message of length 3: 2^3 = 8 possible messages, so log2(8) = 3 bits. Now it could be the case that a "1" signifies heads and that the message "111" therefore means "three heads in a row," or it could also be the case that "1" signifies tails and that the message "111" therefore means "three tails in a row." But what the 111 means does not affect the measurement. But does it follow that information can be without meaning? Does it follow that Shannon information itself is meaningless? By no means. In fact, a measurement of 3 bits tells us something quite specific and meaningful. Can you guess what Shannon information tells us? Can you tell us what Shannon information means? Mung
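A minimal Python sketch of that count, assuming a fair coin:

import math

n_flips = 3
n_messages = 2 ** n_flips        # 8 equally likely three-flip messages (000 ... 111)
bits = math.log2(n_messages)     # log2(8) = 3.0 bits, i.e. 1 bit per fair flip

print(n_messages, "possible messages,", bits, "bits for a message such as '111'")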
BTW, it seems to me that I may have given Ms. Liddle some support, in that her assertion that a meaningful message can have no Shannon information at all is clearly illustrated by Holmes's remark to Watson in "Silver Blaze". Allen_MacNeill
Furthermore, it seems to me that CSI doesn't either, at least insofar as it's defined in Dembski's formulation. I intend to discuss this with him if I can find the time. Allen_MacNeill
And I think we may actually be in agreement: neither Shannon information nor Kolmogorov information has any bearing on the question of meaningful information. And it is the latter that makes all the difference in biology (and in life in general, for that matter...oh, the puns, the puns) Allen_MacNeill
Mung, I agree completely with you that Shannon information is only a measure of what could be called the "bit content" of a string of bits, and is indeed completely agnostic on the subject of the meaning of that string. Indeed, I would go further and assert that the meaning of a string of bits has no direct or necessary bearing on its Shannon information (i.e. the mathematical measure of its essentially meaningless content).* As just one example, consider Conan Doyle's Sherlock Holmes story, "Silver Blaze":
"Is there any point to which you would wish to draw my attention?" "To the curious incident of the dog in the night-time." "The dog did nothing in the night-time." "That was the curious incident," remarked Holmes.
In other words, a "signal" with no measurable Shannon information (i.e. the lack of barking by the dog) was nevertheless extremely meaningful in the context of the story. * The same goes for its Kolmogorov information as well. Allen_MacNeill
ooh, things are heating up! Looking forward to a long meaningless weekend! Mung
Hi Allen, I think you're confused. It happens. :) I was confused too when I first started to think in depth about information. Hopefully we'll have some time to discuss this fascinating topic. First let's talk about the language being used. For now let's concentrate on Shannon information unless you think Kolmogorov information is relevant in a way that Shannon information is not.
There is a fundamental difference between Shannon information and meaningful information.
What specifically is that fundamental difference? Is it that information can be both meaningful and meaningless, and that Shannon information is meaningless? Doesn't that sound a bit suspect? If you stop and think about it, is meaningless information even coherent? What is it that becomes informed by meaningless information, and what does it become informed about? How did the advent of Shannon information fundamentally change what information fundamentally is? You must argue that prior to Shannon information there was no such thing as meaningless information, but then along came Shannon and he gifted us with that most wondrous and useless of things, meaningless information. Surely that is absurd. Shannon does not give us a definition of what information is, or if he does what he gives us is a mathematical definition, and even then it is not a definition of information that is devoid of meaning.
Both Shannon and Kolmogorov information are essentially “meaningless”, in that measures of both forms of information make no reference to what they mean and are simply measures of the quantity of bits in a message string or the compressibility of that quantity of information.
What is the measure of Shannon information? Is it not Shannon information itself that is the measure? So what I'd request that you clarify is this issue of whether Shannon information is a form of information or a measure of information. You appear to be suffering from the same malady that afflicts Dr. Liddle. Dr. Liddle reasons that because it is true that the meaning of a message is not relevant to the measure being used (Shannon information) it must be the case that the measure being used can tell us that information can in fact be devoid of meaning. I hope you can do a better job of reasoning than this. Secondarily, Dr. Liddle reasons that since she can string together a sequence of symbols and transmit them as a message and measure the "information content" using Shannon information that there can in fact be such a thing as meaningless information. While promptly forgetting that Shannon information is agnostic about any meaning in the message! Again, I hope you can do a better job of reasoning than this. Elsewhere Dr. Liddle has argued that a message could be sent and that a measure of the Shannon information could be made according to which the message contained 0 bits of information. You're welcome to take that one up and show how it can be the case if you like. That is the end of my opening statement. ;) Mung
In comment #51, please substitute "meaningful" for "neaingful" in the third line (damn mutations, they'll get you every time). Allen_MacNeill
Re comment #49: On the contrary, Ms. Liddle is correct: listing the heads and tails in a string of coin flips is indeed legitimate as a representation of that string as Shannon information. That is, the actual output of such a series of coin flips (expressed as 1 = heads and 0 = tails) contains a precise amount of Shannon information, which can be calculated using Shannon's equation. It is also the case that Shannon information (as well as Kolmogorov information) is absolutely meaningless. The "meaning" of the bits in a string has no bearing on its Shannon or Kolmogorov information. Neither does it necessarily have any bearing on its complex specified information (CSI), for that matter. If one defines CSI as the amount of information necessary to specify the content and order of a string of bits, the meaning (if any) of that string has no effect on its CSI. For example, this string: xztojhayfudmolngwpeeqoeudhitrobrcveks has exactly the same CSI as this string: skevcrbortihdueoqeepwgnlomdufyahjotzx and neither has any meaning at all. By contrast, this string: thequickbrownfoxjumpedoverthelazydogs has exactly the same CSI as the previous two strings (all three contain the very same letters, just arranged in three different sequences), and yet the third string has all the meaning in the world. Here's another, simpler example, based on Ms. Liddle's example. Consider the following string of 1s and 0s: 01101110010111011110001001 Just looking at it, would one be justified in the assumption that this is a record of a series of 26 coin tosses, in which 1 = heads and 0 = tails? The frequency of 1s = 15/26 = 0.577, whereas the frequency of 0s = 11/26 = 0.423. Seems close enough to a random distribution, right? If your answer is yes, based on the frequency distribution of 1s and 0s (i.e. not 50/50, but fairly close), consider the same string broken up like this: 0 1 10 11 100 101 110 111 1000 1001 Neither random nor "meaningless" any more at all, is it? When I tell you the string of 1s and 0s is the outcome of 26 coin flips, the meaning of the string is completely different than if you perceive it as the first ten numbers in base two (i.e. binary), which are of course written completely differently than the same ten numbers in base ten (0 1 2 3 4 5 6 7 8 9). Same meaning, completely different strings. Allen_MacNeill
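A minimal Python sketch of the calculation Allen describes (the function name and the frequency-based estimate below are illustrative additions, not part of his comment): it computes the Shannon measure of the 26-symbol string from its own 1/0 frequencies, and confirms that reordering the symbols, which is all that separates the "coin flip" reading from the "binary numeral" reading, leaves that measure unchanged.

    from collections import Counter
    from math import log2

    def shannon_bits(s):
        """Total Shannon information of a string, estimated from its own
        symbol frequencies: n * H, where H = -sum(p * log2(p)) bits/symbol."""
        n = len(s)
        return -sum(c * log2(c / n) for c in Counter(s).values())

    flips = "01101110010111011110001001"   # the 26 "coin flips" above: 15 ones, 11 zeros
    print(shannon_bits(flips))             # roughly 25.6 bits, i.e. about 0.98 bits per flip

    # The same symbols in any other order (including the order that reads as the
    # binary numerals 0, 1, 10, 11, ... 1001) give exactly the same measure,
    # because the symbol counts, and hence the frequencies, are identical.
    print(shannon_bits("".join(sorted(flips))) == shannon_bits(flips))   # True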
Best of luck Allen, and thanks for the information! material.infantacy
P.S. I invite you to refer to me as m.i.. I tried to change my screen name to something less boorish and indignant, appropriate for more constructive discussions (Mission.Impossible) but was met with discontentment from the moderation filter. material.infantacy
Oops, I meant the second "meaningless" string to be the first one listed in reverse order, but pasted the first string without reversing it. Ah, well... Allen_MacNeill
There is a fundamental difference between Shannon information (and Kolmogorov information as well) and neaingful information. Both Shannon and Kolmogorov information are essentially "meaningless", in that measures of both forms of information make no reference to what they mean and are simply measures of the quantity of bits in a message string or the compressibility of that quantity of information. Totally meaningless strings of bits still have both Shannon and Kolmogorov information. Consider, for example, the following string of letters: xztojhayfudmolngwpeeqoeudhitrobrcveks There are an enormous number of distinct rearrangements of these 37 letters (on the order of 37!, somewhat fewer once the repeated letters are accounted for). Each rearrangement has the same amount of Shannon information. Here's just one example: xztojhayfudmolngwpeeqoeudhitrobrcveks There are an astronomical number more. One way to compress this string would be to rearrange it in a string in which repeated letters are arranged in "clumps" (this is easy to do with Excel). If you do this, the string becomes: abcddeeeefghhijklmnoooopqrrsttuuvwxyz This string consists of 26 "clumps" (perhaps not surprising when one considers that there are 26 letters in the English alphabet). This can be expressed in terms of the Kolmogorov information for this string, which (like the Shannon information) is completely meaningless. Rearranging the letters in the string has no effect on its Kolmogorov information, nor on its Shannon information, neither of which depend on the relationships between the letters, nor on any meaning(s) they might have. However, among those many rearrangements of these letters is this one: thequickbrownfoxjumpedoverthelazydogs and this string is packed with meaning at multiple levels, from the meaning of the entire set (at how ever many levels one chooses, including the meaning you associate this string with, such as high school typing class, etc.), the meaning of each word ("the", "quick", "brown", "fox", etc.), the meaning of each set of words ("quick brown fox", "lazy dogs", etc.). None...not one of these levels of meaning is captured by either Shannon or Kolmogorov information, and yet it is precisely this kind of meaning that is woven throughout biology at all levels, from the sequences of nucleotides in DNA to the relationships between major biomes in the biosphere. In brief, we completely lack any coherent, quantitative, or testable theory of meaningful information, and until (or even if) we get one, speculating on the origin of such information is just that: speculation. And yes, I'm working on a coherent, quantitative, and testable theory of meaningful information. Wish me luck! Allen_MacNeill
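The rearrangement point can be checked directly with the same frequency-based estimate sketched a few comments above (again an illustrative addition, not Allen's own code): all three 37-letter strings are anagrams of one another, so their frequency-based Shannon measures come out the same, even though only one of them means anything in English.

    from collections import Counter
    from math import log2

    def bits_per_symbol(s):
        """Empirical Shannon entropy of a string, in bits per symbol."""
        n = len(s)
        return -sum((c / n) * log2(c / n) for c in Counter(s).values())

    scrambled = "xztojhayfudmolngwpeeqoeudhitrobrcveks"
    clumped   = "abcddeeeefghhijklmnoooopqrrsttuuvwxyz"
    pangram   = "thequickbrownfoxjumpedoverthelazydogs"

    # Identical letter counts, therefore identical frequency-based measures.
    assert Counter(scrambled) == Counter(clumped) == Counter(pangram)
    print(bits_per_symbol(pangram))   # all three strings give this same entropy
                                      # (up to floating-point rounding)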
Elizabeth, appreciated. There's no hurry. I'd add that, I don't pretend to know how to do your job, but I'd like to suggest that your operational definition of information doesn't necessarily need to precede your research, if you choose to model the evolution of the critical aspects of a single-celled organism. Since we all seem to agree that the simplest of living organisms exhibit CSI, I suggest this is a valid approach which is less susceptible to misunderstanding, equivocation, and world view disparity/incompatibility. In other words, modeling/simulating the evolution of a relatively elementary DNA-based organism from simple self-replicating molecules could result in an operational definition of information suitable for publishing your results. With that in mind I propose that the target organism would need at least the following components: -- An inert storage medium representative of DNA which codes for protein sequences. -- Protein-based machinery that performs the task of translating the DNA into protein sequences. This machinery must itself be encoded into the DNA. Requirements: -- The target organism must be able to reproduce itself with perfect fidelity. -- The physical connection between the proto-organism's elements (DNA, protein sequences) and the molecular self-replicator which bears them should be explicit. -- Valid sequences need to be “found” in some way, instead of simply assumed. This will be contentious and would require quite a bit more effort to define parameters for. I’m not suggesting that we begin right here and right now with this methodology in mind, I’m not even sure I’m up to the task. However I am suggesting that this is a potentially valid approach, and invite you to explain why something of this sort wouldn’t be required before providing evidence of the spontaneous, blind generation of information, for which a universally satisfying definition seems as elusive as for any of the phenomena relating to life and mind. I'll risk going a little further and suggest that we shouldn't attempt a model of this sort based on a dictionary definition of a contentious concept; but since we can probably agree that we know CSI when we see it, as with living organisms, preferably the model should deal with parameters derived from the empirical. material.infantacy
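One way to picture the first two components on material.infantacy's list, an inert coding medium plus machinery that translates it, is the toy sketch below. It is an illustrative addition, not part of his proposal: the function and table names are hypothetical, and the table is only a four-entry fragment of the standard genetic code chosen to keep the example tiny.

    # A toy illustration of "representation -> protocol -> effect": an inert
    # string is read three symbols at a time, and a fixed mapping (the protocol)
    # determines which amino-acid name is produced. The string is not altered by
    # being read, and the mapping is not consumed by the interaction.
    CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

    def translate(mrna):
        """Map successive codons to amino-acid names via the protocol table."""
        protein = []
        for i in range(0, len(mrna) - 2, 3):
            residue = CODON_TABLE.get(mrna[i:i + 3])
            if residue is None or residue == "STOP":   # unknown triplet or stop codon
                break
            protein.append(residue)
        return protein

    print(translate("AUGUUUGGCUAA"))   # ['Met', 'Phe', 'Gly']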
Elizabeth Liddle:
First of all: your remark about my “many misconceptions of information” is not only a gross misrepresentation, but insufferably arrogant.
lol. This is the woman who asserted that she could generate 100 bits of "Shannon information" by simply tossing a coin 100 times. This is the same woman who believes that meaningless information is a coherent concept. Mung
The Implicit Genome makes it clear that we need to enrich our conventional view, both of how information is encoded in the genome and of what information a genome might contain. In fact, the functioning genome challenges us even to examine what we mean by "information" and "code." ... ...the word "code" already is heavily overloaded with meaning for biologists and engineers, with genetic code (biology), error-correcting and data compression codes (communications), object, source, and machine codes (computers), encryption codes (cryptography), etcetera, in wide use. While it might be possible to formulate a unified definition of "code" that captures these diverse examples, we think it more useful to introduce a distinct concept that does not assume a linear relationship between sequences and "meaning." Engineering offers the notion of "protocol," which is more general and richer than "code" and lets "code" preserve its existing meanings particular to specialized disciplines. To engineers, the term "protocol" is the set of rules by which components interact to create a new (system) level of functionality. - An Engineering Perspective: The Implicit Protocols. John Doyle, Marie Csete, and Lynn Caporale in The Implicit Genome.
Mung
Elizabeth Liddle:
However, rather than assist in the process of developing an unambiguous, non-circular wording of your conceptual definition, you [Upright BiPed] kept on insisting that what you had already offered, and what I had already offered back, was an “operational definition”. It was not.
Another Lie. I will GLADLY retract that claim if: 1.) Upright BiPed states that he did in fact: a. insist he had already offered an operational definition, or b. insist that what Lizzie had offered back was an operational definition, OR [What I recall is that he stated that she had enough or had all she needed to develop her operational definition.] 2.) Elizabeth herself can demonstrate that Upright BiPed did in fact insist such to be the case by supplying the quotes and cites to them. I'd really like to offer Elizabeth at least a chance to regain some credibility. Unfortunately, I don't think it's going to happen. Mung
Elizabeth Liddle:
My claim stands unchallenged.
Liar. Your claim was challenged from the moment it was first uttered. You haven't even begun to give us a reason to think it's true. Mung
material.infantacy: Thanks for your post. I'm off to bed in high dudgeon right now, but may have achieved a lower level of dudgeon by morning. And your post deserves more attention than I am in a fit state to give it right now :) See you tomorrow. Lizzie Elizabeth Liddle
Upright BiPed
“It’s useless” Thank you Dr Liddle.. The distinction between scientists such as yourself and others such as Marshall Nirenberg, could not be more vivid. Perhaps he should return his Nobel.
Upright BiPed: the distinction between yourself and Marshall Nirenberg is that Marshall Nirenberg knew how to operationalise a hypothesis. Your evasion of the rest of my response to you is noted. My claim stands unchallenged. Elizabeth Liddle
Elizabeth Liddle, pardon my intrusion. Here’s some food for thought. How does aboutness evolve? Any simulation which purports to show that specified complexity can be generated via chance and necessity would have to solve one of the same core issues with OOL, and that is how the specification for a series of independently specified functional proteins becomes embedded into a medium (DNA) which appears to have no physical dependence on the proteins it represents. How does one generate both the functions, and the independent specifications for those related functions; because both are necessary for demonstrating the presence of specified complexity, I would say. Beginning with a self-replicating molecule, there would need to exist a relationship between the elements of that molecule and the inert medium (DNA); while at the same time there would need to exist a corollary relationship between the elements of the molecule and the functional proteins which are specified within the inert medium. Take away that pesky independent specification and I’ll write programs all day which simulate the spontaneous generation of “functional complexity,” because I’d get to determine the conditions under which a sequence would be considered functional; and that would be smuggling the specification into the simulation, which would save me from writing about a billion lines of code. One can't merely begin generating permutations of a sequence space and attribute function to those permutations, because there would need to be a definition for the function, that is, a specification. We’d need to know why those permutations correspond to a functional arrangement, when they're supposed to be arbitrary; at least, one would need to specify which ones have function if necessity couldn’t be explicated and the functional sequences mapped out. Meaning, or aboutness, would need to be assigned to those permutations, and that would be begging the question by assuming a specification in order to demonstrate one. This is an important point. For any given sequence space I think we can agree that a small number of the permutations would have function. So if we define that function as part of the simulation, we fail because we’ve provided a specification in order to demonstrate one. If we choose instead to represent reality, we need to find existing functions that are part of the valid protein sequence space, and in this case we’re left with an impossible search. If I’m overstating or misinterpreting the problem, let me know. So if the target of a simulation is a functional system that demonstrates specified complexity, it would need to have inherent specification and function, and they'd need to be independent, and they'd need to be a result of the same blind process, and we couldn't ever assume or define the meaning of one in the context of the other, or we’d be begging the question. That's the circularity I’d bet you’re attributing to UB's definition of information. The circularity isn't present in the definition, but rather in the system itself: it's paradoxically interdependent, from a material standpoint, so of course any definition of information which accounted for what we observe in living systems would appear circular to you. The specification needs to be independent of the concrete product, but in any system which is the result of a blind process, it couldn't have any independence -- everything would be the result of the same process. 
The problem you seem to be having is with the fact that there can't be anything abstract about a physical system. When you observe the inner workings of a cell, you see merely a system acting out physical laws -- and it quite possibly is, there being no theoretical problem with describing its operation via physical processes -- but it's not the operation of the thing that needs to be explained, it's the origin of it (which I believe requires an abstraction; and I believe that abstraction requires a mind). The problem that needs to be solved is how to arrive at specified parts which comprise a functionally integrated whole, which is what we observe in living systems. What you really need to be explaining is how aboutness comes about via blind processes. That's a tall order, and I believe that of anything your simulation might usefully do, it wouldn't solve the CSI "problem" unless it could explain the origin of aboutness, or demonstrate that aboutness is epiphenomenal to this category of sufficiently complex material arrangements. Demonstrate that aboutness is a property of matter, and you've explained the most troubling aspects of mind, OOL, and CSI. This is a philosophical problem I believe, not an algorithmic one. I’ll say with a degree of confidence that this is an impossible mission, but I'm not against trying, as long as it’s an attempt to deal with the real problem, and not just another hackneyed variation of equating mere complexity with specified complexity. Specified is the key word here; so explain the origin of specification -- of aboutness -- or show that it doesn't exist, and you've won. Set yourself to this task, and never mind trying to squeeze out of ID proponents a definition of information that’s loose enough for you to simulate what you’ve quite possibly already determined you can do. (I think there’s a good chance you already know how you expect to solve the problem and you’re looking for a definition of information that will allow you to do it, claiming victory. I hope I’m wrong and that your sincere and congenial demeanor is genuine. I’m certain you wouldn’t be long satisfied with a victory that rests upon the technicality of defining that which is challenging to define, like “life” or “consciousness”). I’ll say with all sincerity that if this is a problem which can be explicated via some clever theorizing and even more clever programming along with ingenious algorithmic development, I hope you solve it. That would be something indeed, and worthy of historical immortality. But don’t underestimate the issue by attempting to equate specified complexity with garden variety complexity, or you would be setting out to demonstrate that which is trivially true. material.infantacy
"It's useless" Thank you Dr Liddle.. The distinction between scientists such as yourself and others such as Marshall Nirenberg, could not be more vivid. Perhaps he should return his Nobel. Upright BiPed
Hypothesis: That chance and law alone can give rise to information, where “information” is defined conceptually, according to Merriam-Webster, as “the attribute inherent in and communicated by one of two or more alternative sequences or arrangements of something that produce specific effects.” Operationalised hypothesis: Starting only with non-self-replicating entities in a physics-and-chemistry (plus random kinetics) environment, self-replicating “virtual organisms” can emerge that contain arrangement(s) of virtual matter, represented as strings, which in turn cause the virtual organism to self-replicate with fidelity, and thus determine the output of the system, namely a replication of the “virtual organism”. The arrangement must produce its output by means of an intermediary “virtual object”. This “virtual object” must take the form of a second arrangement of “virtual matter” that may interact with the strings and with some other “virtual object” that affect the fidelity of the self-replication of the “virtual organisms” without either permanently altering, or being altered by, the interaction. Annotated Operationalised hypothesis: Starting only with non-self-replicating entities in a physics-and-chemistry ["Necessity"] (plus random kinetics ["Chance"]) environment, self-replicating “virtual organisms” can emerge that contain arrangement(s) of virtual matter, represented as strings, which in turn cause the virtual organism to self-replicate with fidelity, and thus determine the output of the system, namely a replication of the “virtual organism”. The arrangement must produce its output by means [protocol] of an intermediary “virtual object” [which will translate the "representation" of the output embodied in the string into the output itself]. This “virtual object” must take the form of a second arrangement of “virtual matter” that may interact with the strings and with some other “virtual object” that affect the fidelity of the self-replication of the “virtual organisms” without either permanently altering, or being altered by, the interaction [i.e. the "break in the physical chain"]. Final note: it should be possible for any observer to figure out what sequences produce what effects, a la Nirenberg. I can incorporate that if you want. UBP - I do not believe anything is missing. If you do, tell me what. Elizabeth Liddle
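For what it is worth, the shape of this operationalised hypothesis can be put into code without committing to any particular chemistry. The sketch below is entirely illustrative: every class name, mapping, and number is hypothetical, and nothing here addresses how such a translator could emerge from chance and necessity, which is the very point in dispute. It only shows where each element of the wording above would sit: a string of "virtual matter", an intermediary object that reads the string without altering it or being altered by it, and a replication whose fidelity the string determines.

    import random

    class Translator:
        """Intermediary "virtual object": maps symbols in the string to a copy
        fidelity, interacting with the string without changing it or being changed."""
        def __init__(self, mapping):
            self.mapping = dict(mapping)            # a fixed protocol

        def fidelity(self, genome):
            scores = [self.mapping.get(ch, 0.5) for ch in genome]
            return sum(scores) / len(scores) if scores else 0.0

    class VirtualOrganism:
        def __init__(self, genome):
            self.genome = genome                    # the arrangement of virtual matter

        def replicate(self, translator, rng):
            # Each symbol is copied correctly with the probability the translator
            # assigns; "Chance" enters through rng, "Necessity" through the rule.
            p = translator.fidelity(self.genome)
            child = "".join(ch if rng.random() < p else rng.choice("ABCD")
                            for ch in self.genome)
            return VirtualOrganism(child)

    rng = random.Random(0)
    translator = Translator({"A": 0.99, "B": 0.95, "C": 0.90, "D": 0.80})
    parent = VirtualOrganism("ABBA")
    print(parent.genome, "->", parent.replicate(translator, rng).genome)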
Upright BiPed: until you actually read, and comment on, my operational definitions, however brutally, I see no reason for taking you seriously. And this:
Isolate the representations, decipher the protocols, and document the effects.
is useless. Except for the last three words. Elizabeth Liddle
Dr Liddle, The conceptual definition is one that you yourself wrote out, and I agreed with it. The only other thing you needed was a set of operations to prove the existence of information. Nirenberg et al gave you those: Isolate the representations, decipher the protocols, and document the effects. Upright BiPed
Upright BiPed:
Dr Liddle at 32, If your newfound indignation is any measure, then perhaps I’ve finally succeeded in dragging you out into the light of day. It was not an easy task. However, I would only remind you that your many misconceptions of information (prime example: information is contained in everything) were not my responsibility. You held onto each of them with great tenacity – as you did the utterly laughable idea that Nirenberg’s (and the remaining world’s) method of confirming information was irrelevant to the success of your simulation. It should be no surprise, then, that the tone of this conversation has roughened over the course of the past nine weeks. Good luck with your simulation.
I have to confess to a considerable degree of annoyance at this post, Upright BiPed. First of all: your remark about my "many misconceptions of information" is not only a gross misrepresentation, but insufferably arrogant. I have not harboured "many misconceptions of information". Words can mean many things, and dictionaries are descriptive, not prescriptive. What I have sought, throughout this conversation with you, is to ascertain what you, Upright BiPed, mean when you use the word. And I have to tell you that you, Upright BiPed, do not get to define, prescriptively, all uses of the word "information". No-one does, not Shannon, not Merriam-Webster, not me. I did not "hang on to each with great tenacity". I have said, on several occasions, that I believe my claim would hold for any definition of "information" (as long as it wasn't "hedgehog" or "Homer Simpson's mother"). To repeat: what I sought was your definition of information, as in the counter claim to mine: that Chance and Necessity alone cannot bring information into existence. But not only did I require a conceptual definition from you, I required one that was neither ambiguous nor circular, because only when a conceptual definition is neither ambiguous nor circular can it be successfully operationalised. However, rather than assist in the process of developing an unambiguous, non-circular wording of your conceptual definition, you kept on insisting that what you had already offered, and what I had already offered back, was an "operational definition". It was not. I did not expect it to be - it is my job, as the person carrying out the demonstration, to operationalise it with respect to my demonstration, to your satisfaction. But not once have you even commented on my operationalisations. Actually you have appeared to me to studiously ignore them, not even, IIRC, bothering to copy-paste them into your responses to me. I find that, to be generous, puzzling, and to be less generous, indicative of a desire to avoid the nailing down of something that could, in fact, be subjected to rigorous testing. I hope I am wrong, and that in fact it is an oversight. But until I am reassured of the latter, obviously I will not be conducting any simulation whatsoever. What would be the point? Why bother to demonstrate a claim that no-one will commit themselves to disputing? My claim stands: that Chance and Necessity alone can create information, by any definition of information that anyone cares to offer (within the bounds of normal English usage). If someone can provide me with a clear conceptual definition that can be operationalised, i.e. is unambiguous and non-circular (Merriam-Webster's, as cited by Meyer for instance, or indeed my rather more rigorous one, inspired by Upright BiPed, which, I submit, incorporates both his requirement of an inert intermediary translator object, analogous to tRNA, and my own more rigorous requirement that the information must be functional in some sense) then I am more than happy to proceed. I would also be happy to proceed with a definition based on CSI, which was what I originally had in mind. But if no-one is willing to step up to the plate, then I consider my claim unchallenged. I'll post it on my new blog, and if anyone wants to take it up, I'd be more than delighted. There is no catch. All that is required is an unambiguous, non-circular definition of "information" (or "complex specified information" if you prefer). *growls* Lizzie Elizabeth Liddle
The fact that you haven't retracted (a comment that you yourself have demonstrated to be false) is the problem, Dr Liddle. Your earlier attempt to brush it off (as a misunderstanding) did not suffice. However, there comes a point where having you actually retract your remark is less important than the display of you not retracting your remark. We've reached that point, so there is no need for you to concern yourself with it any longer. Upright BiPed
Mung:
But you will return here to post your retraction?
Yes. The fact that you have to ask is the problem. Elizabeth Liddle
Dr Liddle at 32, If your newfound indignation is any measure, then perhaps I’ve finally succeeded in dragging you out into the light of day. It was not an easy task. However, I would only remind you that your many misconceptions of information (prime example: information is contained in everything) were not my responsibility. You held onto each of them with great tenacity – as you did the utterly laughable idea that Nirenberg’s (and the remaining world’s) method of confirming information was irrelevant to the success of your simulation. It should be no surprise, then, that the tone of this conversation has roughened over the course of the past nine weeks. Good luck with your simulation. Upright BiPed
Upright BiPed:
You have to isolate the representations, decipher the protocols, and document the effects – just like everyone else.
Elizabeth Liddle:
I have, however, eschewed words like “decipher” “protocols” and “representation” because all those need to be unpacked and denuded of back-references to “information” which is what we are trying to demonstrate in the first place.
Upright BiPed:
The problem Dr Liddle is that when you remove from your methodology the critical requirements for the existence of information, an effect may arise, but you would have no way to confirm that information caused it. We have been over this before and it’s apparently a subtlety that you either fail to grasp or wish to ignore.
It's getting to the point that I can just copy and paste the responses, lol. I can carry on this "conversation" without either one of them. Upright BiPed and Dr. Elizabeth Liddle, you have both now become irrelevant! Let me see if I have this right yet: 1. Isolate the representations. 2. Decipher the protocols. 3. Document the effects. Almost sounds like a procedure or a set of operations a person could follow to demonstrate the presence of information. Upright BiPed:
Look at your sentence “when it is not the objects themselves, but a arrangement of those objects that produces the specific effects.” What is missing from this sentence?
I know! I know! PROTOCOL!!! Elizabeth Liddle:
Well, an “n” obviously. Otherwise, I don’t know.
It's not that freaking hard to decipher. Mung
But you will return here to post your retraction? Mung
Upright Biped:
Dr Liddle, It seems that the time for comments has been closed on the other (one of Gil’s) threads we took over. That shouldn’t be necessary this time, as I think we’ve said all that can be said. I just wanted to comment on your last post to me. I am sure you will have a response, and then perhaps we can be done with it. Thanks… - – - – - – - – - – - – - Dr Liddle at 231, What makes it information, I think we agreed, is when it is not the objects themselves, but a arrangement of those objects that produces the specific effects. As, for example, when a codon produces an amino acid, not because the nucleotides themselves produce this effect, but because the arrangement of the nucleotides produce this effect. The problem Dr Liddle is that when you remove from your methodology the critical requirements for the existence of information, an effect may arise, but you would have no way to confirm that information caused it. We have been over this before and it’s apparently a subtlety that you either fail to grasp or wish to ignore.
No, Upright BiPed. From where I'm standing it's a circularity that you fail to grasp or confront :) Or else you are misunderstanding me. I have willingly and gladly moved from Dembski's "design detection" model, where you look at a pattern, and diagnose its information content (specifically, its complex specified information content, plus a term to determine whether it could have arisen by chance) to your own criteria, which include a protocol. As I've said, this makes a lot more sense to me, and I'm not sure why you think I'm ignoring it! Unless it's because I'm digging my heels in over that little circularity issue – but I thought I'd sorted that out in my latest version.
The subtlety has been demonstrated for you in your last mis-statement, but I think you were unsuccessful in reading my clarification closely enough to actually understand it. So I will try again. Look at your sentence “when it is not the objects themselves, but a arrangement of those objects that produces the specific effects.” What is missing from this sentence?
Well, an “n” obviously. Otherwise, I don’t know. Seems pretty good to me.
You are attempting a simulation of reality. No matter what happens in this simulation, it will be once removed from reality.
Well, up to a point, Lord Copper. I'm basing it on reality, but the only "simulated" part is the physics'n'chemistry, which in nature would be a given, but in the case of my "sim" is designed by me. And I guess the random thing, although random number generators are pretty random. As for the rest, it's not a simulation at all. It's instantiated in a computer, not in molecules, but as we agree, information is immaterial anyway, so the instantiation doesn't matter. In the same way GAs are not usually "simulations" – they are perfectly real evolutionary systems that solve perfectly real problems. But with that proviso, sure. But I think we'd already agreed that my Chance and Necessity parameters were mutually OK.
With this special dispensation in hand, you want to falsify ID by showing that information can arise from chance contingency and physical law.
Well, I want to falsify that claim. There could still be an ID.
But apparently, you want the requirements for information to be once removed from reality as well. You have said that it is unacceptable for you to be held to the same standards of confirming information as all other instances of information ever known to exist.
Um no. I did not say that. Not never not nohow. Contrariwise.
This is, in itself, an amazing claim.
It would indeed be.
Yet all you’ve done to justify this claim is to make the laughable procedural argument that (with those proven standards) you would not be able to tell if information came into being, or not, based on some definitional issue. Of course, both myself and the entirety of recorded history disagrees with you.
Honestly, UPD, I cannot parse this. I literally do not know what you are talking about.
LIDDLE: What I want to know is what, in your view, makes the mapping between a codon and an amino acid “information”, and the mapping between a foot and a footprint not “information”. BIPED: … one mapping is completely disassociated while the other requires a direct physical interaction. In the instance of the relationship of codon to amino acid (with the codon/anticodon being a representation and the amino acid being the effect) they never directly interact. In the case of a foot and a footprint (the foot being the object and the footprint being the effect) the two must directly interact. There is no representation between them, and therefore no mapping.
And as I read that, I thought I understood what you meant. I still think I did. I have just expressed it without the word "representation", which is somewhat problematic, as most definitions of it use words like "symbol" or "information", which lands us in circularity. That's why I like the Merriam-Webster definition, which, it seems to me, captures exactly this point, especially with my addition, but without using terms which require that-to-be-defined to be defined!
You as an observer say there is a mapping between them, but that’s only because (for you) the footprint has become the representation instead of the effect, and its effect is your association of it to the foot that created it. The point here is that within your simulated environment, you could never actually confirm the true existence of information without doing exactly as everyone else (without exception) has had to do. Without this proven methodology, you could not effectively confirm the difference between a simulated footprint and a simulated amino acid. You have to isolate the representations, decipher the protocols, and document the effects – just like everyone else. This is the piece of factual historical advice you simply refuse to accept. Who knows why? A good candidate reason would be the fact that to accept it would be to accept the fundamental reality that two lines of causation (the physical chain of nucleotides, and the physical chain in the resulting polypeptide) have become coordinated by an immaterial, yet observable, phenomenon known as information.
And if you would actually read my attempts to operationalise all this UPD, you would see that this is exactly what I have done. I have, however, eschewed words like "decipher", "protocols" and "representation" because all those need to be unpacked and denuded of back-references to "information" which is what we are trying to demonstrate in the first place. To be specific: the methodology I have proposed is: firstly to establish a correlation between a primary pattern (a sequence) and an output (the fidelity of a replication), and secondly to require that that correlation is achieved via an intermediate pattern (equivalent to tRNA) that neither permanently alters the primary pattern nor is altered by the process of facilitation. That seems exactly what you require, except that I have expressed it in unambiguous criteria that anyone can check. They can also, if they like, even do a Nirenberg on it, and figure out what features of my pattern sequence map on to what specific outputs (for instance, it might be an object that attaches itself to my "cell membrane" and prevents it disintegrating too readily). I'll even put that in specifically if you like.
This break in pure physicality has been documented in all instances of recorded information, and it has to be that way or nothing could represent anything. The letters that spell the word ‘apple’ are not the same as the fruit. A bee’s dance isn’t the same thing as flying off in a particular direction. The computer code for the letter ‘a’ is not the same as the letter ‘a’. A codon is not the same thing as an amino acid; it is not a part of the physical chain of the amino acid – and they never directly interact.
Right. And now that I know that this is what you mean by “break in physicality” then that’s OK. It’s in there. I’ll have the equivalent of a genome, tRNA molecules and amino acids, and those amino acids will help the virtual organism reproduce itself faithfully. OK?
The thing that connects all of these is the presence of physical protocols at the receiver of the information. It is this protocol which establishes the mapping of one thing to another – the arrangement from one causal chain having an effect in the subsequent causal chain, leading to function. The state of the effect has been symbolically represented in a physical object, and that physical object (with its representation in tow) can create that effect by means of a protocol. There is no way to get around this, but in your case, it is certainly hasn’t been for a lack of trying. Unfortunately for you, you’ve worn the wheels off of trying, but the facts remain.
Well, weirdly, you succeeded long ago – you just somehow don't recognise your own criteria when expressed as an unambiguous operationalisation! But it's all in there. Hidden in plain sight maybe.
In your last response you actually objected to me adding a footnote to clarify a term that you yourself introduced. This is nothing more than objecting for the sake of objecting.
Well, without knowing what you are talking about I can’t comment. But if I was being an asshole, I apologise. Occasionally I am.
And one can look for other instances of the same. You also objected to having the term “representation” in your operational definition, even though it was part of your conceptual definition.
Um yes. That would be because, er “representation” needs to be operationalised? Oh come on UPD, this operationalisation thing isn’t that hard! And you are a statistician you said, you know how it’s done!
You wanted that language changed to an ‘arrangement that determines’ the effect. I have no particular problem with that, but when you go to revamp the operational definition I offered, you included the word “represented” in your text, which is the past tense root of the word “representation” – the very word you were objecting to. It then becomes obvious that these are fabricated objections.
UBD: you did not offer me an operational definition. You may have thought you did, but you did not. I operationalised it, and if I included the word "representation" in my operational definition then I most certainly should not have done. But I don't think I did. In fact, I'm sure I didn't.
Given that I cannot influence you to accept the only known method of confirming the existence of information, I suggest you use whatever definition you want. – - – - – - – - – - – - – - – - BTW … Your claim (against ID making its case) has therefore been demonstrated as false by your participation here, and confirmed by your own words. The falsification you seek to create would not be necessary otherwise. You should retract your claim – even though you won’t. - – - – - – - – - – - – - – - – - –
This is all false.
Then: I do realise that many people here, yourself included, consider that there is something fundamentally implausible about information being created by material mechanisms. And now: “I’ll get going, though it may take a while (weeks, at least, possibly months) … But I am not going to demonstrate that there is information in the genome – I’ve already done that using at least two definitions. And I’m certainly not going to demonstrate that it got there through Chance or Necessity.”
What is your point? Is it your perception that there is something inconsistent in these two statements? OK, Upright BiPed: as at least part of the problem with this conversation has been the very difficult nature of the medium – spread over many threads, on different topics, some locked etc, I’m going to set this up on my brand new blog. I’ll post the URL, and anyone is welcome to join me there, and/or comment on it here. That way we can keep the thread open for as long as we want, and I can post updates (if we get that far) once I get going. Cheers Lizzie Elizabeth Liddle
kf; the atheists' response to Euler's identity 'tis an enigma indeed,,, whereas the correct response to such an amazing thing should be something along these lines; Amazed by Kutless - music video http://www.youtube.com/watch?v=dHXL3__iQ3A ,,,we instead witness a response from atheists along these lines: Nothing to see here - video http://www.youtube.com/watch?v=rSjK2Oqrgic bornagain77
F/N Groov, if you have something new, format it and submit. kairosfocus
PS: Video kairosfocus
Groov: By reworking and hiding the astonishing pattern, you change the subject. That is, you have set up a strawman. Is that how you reason normally? The point is that the Euler equation unites the five most significant numbers in Math in one expression, and by its context draws in all of the world of things that are exponential and frequency related, through the astonishing power of the complex exponential. Even Wiki is moved to admit:
Euler's identity is considered by many to be remarkable for its mathematical beauty. These three basic arithmetic operations occur exactly once each: addition, multiplication, and exponentiation. The identity also links five fundamental mathematical constants: The number 0, the additive identity. The number 1, the multiplicative identity. The number π, which is ubiquitous in trigonometry, the geometry of Euclidean space, and analytical mathematics (π = 3.14159265...) The number e, the base of natural logarithms, which occurs widely in mathematical and scientific analysis (e = 2.718281828...). Both π and e are transcendental numbers. The number i, the imaginary unit of the complex numbers, a field of numbers that contains the roots of all polynomials (that are not constants), and whose study leads to deeper insights into many areas of algebra and calculus, such as integration in calculus. Furthermore, in algebra and other areas of mathematics, equations are commonly written with zero on one side of the equals sign. A poll of readers conducted by The Mathematical Intelligencer magazine named Euler's Identity as the "most beautiful theorem in mathematics".[1] Another poll of readers that was conducted by Physics World magazine, in 2004, chose Euler's Identity tied with Maxwell's equations (of electromagnetism) as the "greatest equation ever".[2] An entire 400-page mathematics book, Dr. Euler's Fabulous Formula (published in 2006), written by Dr. Paul Nahin (a Professor Emeritus at the University of New Hampshire), is devoted to Euler's Identity. This monograph states that Euler's Identity sets "the gold standard for mathematical beauty."[3] Constance Reid claimed that Euler's Identity was "the most famous formula in all mathematics."[4] The mathematician Carl Friedrich Gauss was reported to have commented that if this formula was not immediately apparent to a student upon being told it, that student would never become a first-class mathematician.[5] After proving Euler's Identity during a lecture, Benjamin Peirce, a noted American 19th century philosopher/mathematician and a professor at Harvard University, stated that "It is absolutely paradoxical; we cannot understand it, and we don't know what it means, but we have proved it, and therefore we know it must be the truth."[6] Stanford University mathematics professor Dr. Keith Devlin said, "Like a Shakespearean sonnet that captures the very essence of love, or a painting that brings out the beauty of the human form that is far more than just skin deep, Euler's Equation reaches down into the very depths of existence."[7]
Further to this, someone above said math is not about reality. That is a thinly disguised confession of physicalism. Sorry, the power of mathematics in science is testimony to how it does capture the logical structure and relationships of reality in a compact, powerful way. And on this identity and its linkages, we have the whole world of Fourier and Laplace transforms as well as complex number theory with all the amazing things they can do. (Let's just say that I used to have my students go out and do pole-spotting exercises to fix the relevance and power of complex frequency domain analysis in their minds.) That's reality, if anything is! And trying to brush it away simply inadvertently underscores its force. GEM of TKI kairosfocus
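For anyone who wants to check the identity numerically rather than take it on authority, two lines of Python suffice (an illustrative addition, not part of the comments above); floating-point arithmetic leaves a residue on the order of 1e-16 rather than an exact zero:

    import cmath

    # e^(i*pi) + 1 should be exactly 0; in floating point it comes out as a
    # number on the order of 1e-16, i.e. zero to machine precision.
    print(cmath.exp(1j * cmath.pi) + 1)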
Mung, there will not be any retraction. All of her passive nature aside, in the end Dr Liddle displays the same zero tolerance policy that virtually all other materialists display. (Besides, she's already sniffed out a retraction of sorts - as long as everyone fully understands that nothing said could be understood to mean anything anyone thought it meant, and was really just one momentous misunderstanding). It is what it is. Upright BiPed
That's fine Dr Liddle. But I doubt we have to kill another thread, given that our individual positions are clear. Upright BiPed
But will you be posting your retraction here? Mung
UPD - will respond in detail later, but to let you know: as this conversation is a technical nightmare, with bits of it scattered through unsearchable threads, some locked, I'm just setting up a blog where we can continue with extended conversations. I'll send you a link when I've got a piece up, and people here will be welcome to come and comment. But later today, I'll have a look at your response above, I hope (got some RL stuff to do between now and then!) cheers Lizzie Elizabeth Liddle
Dr Liddle, It seems that the time for comments has been closed on the other (one of Gil's) threads we took over. That shouldn't be necessary this time, as I think we've said all that can be said. I just wanted to comment on your last post to me. I am sure you will have a response, and then perhaps we can be done with it. Thanks... - - - - - - - - - - - - - Dr Liddle at 231,
What makes it information, I think we agreed, is when it is not the objects themselves, but a arrangement of those objects that produces the specific effects. As, for example, when a codon produces an amino acid, not because the nucleotides themselves produce this effect, but because the arrangement of the nucleotides produce this effect.
The problem Dr Liddle is that when you remove from your methodology the critical requirements for the existence of information, an effect may arise, but you would have no way to confirm that information caused it. We have been over this before and it’s apparently a subtlety that you either fail to grasp or wish to ignore. The subtlety has been demonstrated for you in your last mis-statement, but I think you were unsuccessful in reading my clarification closely enough to actually understand it. So I will try again. Look at your sentence “when it is not the objects themselves, but a arrangement of those objects that produces the specific effects.” What is missing from this sentence? You are attempting a simulation of reality. No matter what happens in this simulation, it will be once removed from reality. With this special dispensation in hand, you want to falsify ID by showing that information can arise from chance contingency and physical law. But apparently, you want the requirements for information to be once removed from reality as well. You have said that it is unacceptable for you to be held to the same standards of confirming information as all other instances of information ever known to exist. This is, in itself, an amazing claim. Yet all you’ve done to justify this claim is to make the laughable procedural argument that (with those proven standards) you would not be able to tell if information came into being, or not, based on some definitional issue. Of course, both myself and the entirety of recorded history disagrees with you.
LIDDLE: What I want to know is what, in your view, makes the mapping between a codon and an amino acid “information”, and the mapping between a foot and a footprint not “information”. BIPED: … one mapping is completely disassociated while the other requires a direct physical interaction. In the instance of the relationship of codon to amino acid (with the codon/anticodon being a representation and the amino acid being the effect) they never directly interact. In the case of a foot and a footprint (the foot being the object and the footprint being the effect) the two must directly interact. There is no representation between them, and therefore no mapping. You as an observer say there is a mapping between them, but that’s only because (for you) the footprint has become the representation instead of the effect, and its effect is your association of it to the foot that created it.
The point here is that within your simulated environment, you could never actually confirm the true existence of information without doing exactly as everyone else (without exception) has had to do. Without this proven methodology, you could not effectively confirm the difference between a simulated footprint and a simulated amino acid. You have to isolate the representations, decipher the protocols, and document the effects – just like everyone else. This is the piece of factual historical advice you simply refuse to accept. Who knows why? A good candidate reason would be the fact that to accept it would be to accept the fundamental reality that two lines of causation (the physical chain of nucleotides, and the physical chain in the resulting polypeptide) have become coordinated by an immaterial, yet observable, phenomenon known as information. This break in pure physicality has been documented in all instances of recorded information, and it has to be that way or nothing could represent anything. The letters that spell the word ‘apple’ are not the same as the fruit. A bee’s dance isn’t the same thing as flying off in a particular direction. The computer code for the letter ‘a’ is not the same as the letter ‘a’. A codon is not the same thing as an amino acid; it is not a part of the physical chain of the amino acid – and they never directly interact. The thing that connects all of these is the presence of physical protocols at the receiver of the information. It is this protocol which establishes the mapping of one thing to another – the arrangement from one causal chain having an effect in the subsequent causal chain, leading to function. The state of the effect has been symbolically represented in a physical object, and that physical object (with its representation in tow) can create that effect by means of a protocol. There is no way to get around this, but in your case, it is certainly hasn’t been for a lack of trying. Unfortunately for you, you’ve worn the wheels off of trying, but the facts remain. In your last response you actually objected to me adding a footnote to clarify a term that you yourself introduced. This is nothing more than objecting for the sake of objecting. And one can look for other instances of the same. You also objected to having the term “representation” in your operational definition, even though it was part of your conceptual definition. You wanted that language changed to an ‘arrangement that determines’ the effect. I have no particular problem with that, but when you go to revamp the operational definition I offered, you included the word “represented” in your text, which is the past tense root of the word “representation” - the very word you were objecting to. It then becomes obvious that these are fabricated objections. Given that I cannot influence you to accept the only known method of confirming the existence of information, I suggest you use whatever definition you want. - - - - - - - - - - - - - - - - BTW … Your claim (against ID making its case) has therefore been demonstrated as false by your participation here, and confirmed by your own words. The falsification you seek to create would not be necessary otherwise. You should retract your claim - even though you won’t. - - - - - - - - - - - - - - - - - - Then:
I do realise that many people here, yourself included, consider that there is something fundamentally implausible about information being created by material mechanisms.
And now:
“I’ll get going, though it may take a while (weeks, at least, possibly months) … But I am not going to demonstrate that there is information in the genome – I’ve already done that using at least two definitions. And I’m certainly not going to demonstrate that it got there through Chance or Necessity.”
Upright BiPed
So if I have four apples and I give you two, can I still use math (subtraction) to show that I now have two apples?
There is a difference between "mathematics is about reality" and "mathematics can be used to model reality." Neil Rickert
Further, if math isn't about reality, then I sure hope no one is getting paid too much to teach or use it. I knew those teachers were full of it! ScottAndrews
Neil Rickert writes, "There is rather wide agreement that mathematics is not about reality." So if I have four apples and I give you two, can I still use math (subtraction) to show that I now have two apples? Can I use math to calculate my mortgage payments as well as my interest? If math isn't about reality, then what is it about? Barb
"Not only is the universe mathematical, but it’s algorithmic as well. Just ask any subatomic particle. Or engineer." There's a delightful little short on that theme here: https://uncommondesc.wpengine.com/mathematics/nature-by-numbers-a-wonderful-short-short/ Direct: http://www.etereaestudios.com/movies/nbyn_movies/nbyn_mov_vimeo.htm Mission.Impossible
For more info on "who Denton is" see this series: http://www.youtube.com/watch?v=JDsaZeWrh9U And here: http://www.youtube.com/watch?v=HN54TY0FQt8 He's no friend of Darwinism, and has high regard for the ID thinkers. mike1962
Not only is the universe mathematical, but it's algorithmic as well. Just ask any subatomic particle. Or engineer. mike1962
fn: Not only did the Wheeler delayed choice experiment, referenced in post 9,,,, https://uncommondesc.wpengine.com/intelligent-design/michael-denton-on-mathematics-and-stardust/#comment-392642 ,,, show that material reality is subservient to the mathematical reality that is above it,, but a more detailed 'mathematical analysis' by Wigner has shown that this mathematical reality which governs material reality is actually 'observer-centric" at its core; "It was not possible to formulate the laws (of quantum theory) in a fully consistent way without reference to consciousness." Eugene Wigner (1902 -1995) from his collection of essays "Symmetries and Reflections – Scientific Essays"; Eugene Wigner laid the foundation for the theory of symmetries in quantum mechanics, for which he received the Nobel Prize in Physics in 1963. http://eugene-wigner.co.tv/ Here is the key experiment that led Wigner to his Nobel Prize winning work on quantum symmetries: Eugene Wigner Excerpt: To express this basic experience in a more direct way: the world does not have a privileged center, there is no absolute rest, preferred direction, unique origin of calendar time, even left and right seem to be rather symmetric. The interference of electrons, photons, neutrons has indicated that the state of a particle can be described by a vector possessing a certain number of components. As the observer is replaced by another observer (working elsewhere, looking at a different direction, using another clock, perhaps being left-handed), the state of the very same particle is described by another vector, obtained from the previous vector by multiplying it with a matrix. This matrix transfers from one observer to another. http://www.reak.bme.hu/Wigner_Course/WignerBio/wb1.htm ========================= more detailed explanation of exactly how observer-centric symmetries was discovered: -- When I returned to Berlin, the excellent crystallographer Weissenberg asked me to study: why is it that in a crystal the atoms like to sit in a symmetry plane or symmetry axis. After a short time of thinking I understood: being on the symmetry axis ensures that the derivatives of the potential energy vanish in two directions perpendicular to the symmetry axis. (In case of a symmetry plane the derivative of the potential energy vanishes in one direction.) This is how I became interested in the role of symmetries in quantum mechanics. I spent the holidays -- Christmastime and summertime -- in Hungary, in Budapest and in Alsógöd, on the shore of the Danube. There I wrote the book on "Group Theory and its Application to the Quantum Mechanics of Atomic Spectra." [To the author 1983.] -- The intrusion of group theory into quantum mechanics was not received with applause. Wolfgang Pauli called the idea Gruppenpest. Albert Einstein and Erwin Schrödinger also expressed their uneasiness. Max Born and Max von Laue were more encouraging. John von Neumann and Leo Szilard enthusiastically encouraged Wigner's efforts. It was worth to do so: these efforts later resulted in a Nobel Prize. If an experiment is repeated elsewhere in another laboratory under similar conditions, it will give identical result. The experiment today yields the very same result as it yielded yesterday. If we turn the whole equipment by 30 degrees, it will not influence the result. The outcome depends neither on the location and timing of the experiment, nor on the spacial orientation of the equipment. Even speed (e.g. that of the Earth) does not influence the way the laws of Nature work. 
To express this basic experience in a more direct way: the world does not have a privileged center, there is no absolute rest, preferred direction, unique origin of calendar time, even left and right seem to be rather symmetric. The interference of electrons, photons, neutrons has indicated that the state of a particle can be described by a vector possessing a certain number of components. As the observer is replaced by another observer (working elsewhere, looking at a different direction, using another clock, perhaps being left-handed), the state of the very same particle is described by another vector, obtained from the previous vector by multiplying it with a matrix. This matrix transfers from one observer to another. http://www.reak.bme.hu/Wigner_Course/WignerBio/wb1.htm
bornagain77
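Wigner's point about one observer's state vector being carried over to another observer's description by a matrix can be made concrete with a toy calculation. The following Python sketch is only an illustration: the two-component state and the rotation angle are arbitrary choices, not anything taken from Wigner's text, but the preserved total probability is the kind of observer-independence (symmetry) the passage describes.

import math

# One observer describes a particle's state by a two-component complex vector.
state = [1 / math.sqrt(2), 1j / math.sqrt(2)]   # hypothetical state, purely illustrative

# A second observer (say, with apparatus rotated by an angle theta) describes the
# very same state by another vector, obtained by multiplying with a matrix.
theta = math.pi / 6
c, s = math.cos(theta / 2), math.sin(theta / 2)
U = [[c, -s],
     [s,  c]]                                   # a rotation (unitary) matrix

new_state = [U[0][0] * state[0] + U[0][1] * state[1],
             U[1][0] * state[0] + U[1][1] * state[1]]

# Both descriptions assign the same total probability (length 1); the physics
# does not depend on which observer is doing the describing.
print(sum(abs(a) ** 2 for a in state))       # 1.0 (up to floating-point rounding)
print(sum(abs(a) ** 2 for a in new_state))   # 1.0 (up to floating-point rounding)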
As to the 'philosophical enigma' that Denton displays: as was noted earlier, Denton leaned towards naturalism in 'Destiny', yet in the following video, at the 6:00 minute mark, he clearly leans towards Theism:
No Beneficial Mutations - Not By Chance - Evolution: Theory In Crisis - Denton - Gitt - Spetner http://www.youtube.com/watch?v=rLjpblGyFyI
bornagain77
Correction: (MY SCREEN NAME) @yahoo.com
One more comment: 1 + e^(i*pi) = 0 is considered the most elegant equation, partly because it equates to zero. Well, speaking of contrivances, the equation would seem to be a little less elegant in its more compact form, e^(i*pi) = -1, or the similar e^(-i*pi) = -1, but the contrivance does make for wonderful commentary, I admit.
groovamos
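For anyone who wants to see the two forms checked numerically rather than argued over, here is a minimal Python sketch using only the standard cmath module; the stray imaginary residue of about 1e-16 is just floating-point rounding, not a defect of the identity.

import cmath

# Euler's identity in its "elegant" form: e^(i*pi) + 1 should equal 0
z = cmath.exp(1j * cmath.pi) + 1
print(z)                           # roughly 1.22e-16j, i.e. zero up to rounding

# The "compact" forms: e^(i*pi) and e^(-i*pi) should both equal -1
print(cmath.exp(1j * cmath.pi))    # approximately (-1 + 1.22e-16j)
print(cmath.exp(-1j * cmath.pi))   # approximately (-1 - 1.22e-16j)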
Guys, I'm thinking this fits in with a current project of mine. I'm going to propose that much in mathematics is contrived. I think in many cases the contrivances are more like tools forcing open a treasure chest, because what lies at the end is often a jewel of elegant knowledge, but the tools are not always so elegant in some respects. Actually this is part of a dilemma I'm experiencing, as I have written a thesis that I need reviewed. I have taken 50 pages to arrive at an alternative expression for sounding-signal time-of-arrival (TOA) resolution. This is useful in radar, sonar, ultrasonic flow measurement and imaging. In this paper I generate 3 interlocking contrivances, each involving limits which must be taken simultaneously. The starting point for the endeavor is the Shannon-Hartley theorem, which apparently has not until now been successfully used for this purpose. What I've come up with is pretty audacious and possibly wrong, but it allows for an arbitrary noise spectrum instead of the white-noise power quantity used in the standard expression. We have at least one mathematician posting here and there are others around like R. Marks and W. Dembski. Can you guys hook me up with a reviewer? I pay generously. And I contribute (more than posts) to this site, so please email with comments or suggestions at (MY SCREEN NAME) dot yahoo.com
groovamos
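For readers who have not met the starting point groovamos mentions: the Shannon-Hartley theorem gives the capacity of a band-limited channel with additive white Gaussian noise as C = B * log2(1 + S/N). The Python sketch below is only that textbook formula with made-up numbers for a sonar-like channel; it is not anything taken from the thesis itself.

import math

def shannon_hartley_capacity(bandwidth_hz, snr_linear):
    # C = B * log2(1 + S/N), capacity in bits per second
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical channel: 100 kHz of bandwidth at 20 dB signal-to-noise ratio
snr_db = 20.0
snr_linear = 10 ** (snr_db / 10)                      # 20 dB is a factor of 100
print(shannon_hartley_capacity(100e3, snr_linear))    # about 6.66e5 bits per second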
Of note: this may be of interest in connection with Denton's 'We Are Stardust' video. As well as the universe having a transcendent beginning, thus confirming the Theistic postulation in Genesis 1:1, the following recent discovery of a 'Dark Age' for the early universe uncannily matches up with the Bible passage in Job 38:4-11.
For the first 400,000 years of our universe's expansion, the universe was a seething maelstrom of energy and sub-atomic particles. This maelstrom was so hot that sub-atomic particles trying to form into atoms would have been blasted apart instantly, and so dense that light could not travel more than a short distance before being absorbed. If you could somehow live long enough to look around in such conditions, you would see nothing but brilliant white light in all directions. When the cosmos was about 400,000 years old, it had cooled to about the temperature of the surface of the sun. The last light from the "Big Bang" shone forth at that time. This "light" is still detectable today as the Cosmic Background Radiation.
This 400,000-year-old "baby" universe then entered into a period of darkness. When the dark age of the universe began, the cosmos was a formless sea of particles. By the time the dark age ended, a couple of hundred million years later, the universe was lit up again by the light of some of the galaxies and stars that had been formed during this dark era. It was during the dark age of the universe that the heavier chemical elements necessary for life, carbon, oxygen, nitrogen and most of the rest, were first forged, by nuclear fusion inside the stars, out of the universe's primordial hydrogen and helium. It was also during this dark period of the universe that the great structures of the modern universe were first forged. Super-clusters of thousands of galaxies, stretching across millions of light years, had their foundations laid in the dark age of the universe. During this time the infamous "missing dark matter" was exerting more gravity in some areas than in others, drawing in hydrogen and helium gas and causing the formation of mega-stars. These mega-stars were massive, weighing in at 20 to more than 100 times the mass of the sun. The crushing pressure at their cores made them burn through their fuel in only a million years. It was here, in these short-lived mega-stars under these crushing pressures, that the chemical elements necessary for life were first forged out of the hydrogen and helium.
The reason astronomers can't see the light from these first mega-stars, during this dark era of the universe's early history, is that the mega-stars were shrouded in thick clouds of hydrogen and helium gas. These thick clouds prevented the mega-stars from spreading their light through the cosmos as they forged the elements necessary for future life to exist on earth. After about 200 million years, the end of the dark age came to the cosmos. The universe was finally expansive enough to allow the dispersion of the thick hydrogen and helium "clouds". With the continued expansion of the universe, the light of normal stars and dwarf galaxies was finally able to shine through the thick clouds of hydrogen and helium gas, bringing the dark age to a close. (How The Stars Were Born - Michael D. Lemonick) http://www.time.com/time/magazine/article/0,9171,1376229-2,00.html
Job 38:4-11 "Where were you when I laid the foundations of the earth? Tell me if you have understanding. Who determined its measurements? Surely you know! Or who stretched a line upon it? To what were its foundations fastened?
Or who laid its cornerstone, When the morning stars sang together, and all the sons of God shouted for joy? Or who shut in the sea with doors, when it burst forth and issued from the womb; When I made the clouds its garment, and thick darkness its swaddling band; When I fixed my limit for it, and set bars and doors; When I said, 'This far you may come but no farther, and here your proud waves must stop!'"
History of The Universe Timeline - Graph Image http://www.astronomynotes.com/cosmolgy/CMB_Timeline.jpg
As a sidelight to this, every class of elements that exists on the periodic table is necessary for complex carbon-based life to exist on earth. The three most abundant elements in the human body, Oxygen, Carbon, Hydrogen, 'just so happen' to be the most abundant elements in the universe, save for helium, which is inert. This is a truly amazing coincidence that strongly implies 'the universe had us in mind all along'. Even uranium, the last naturally occurring element on the periodic table, is necessary for life. The heat generated by the decay of uranium is necessary to keep a molten core in the earth for an extended period of time, which is necessary for the magnetic field surrounding the earth, which in turn protects organic life from the harmful charged particles of the sun. As well, uranium decay provides the heat for tectonic activity and the turnover of the earth's crustal rocks, which is necessary to keep a proper mixture of minerals and nutrients available on the surface of the earth, which is necessary for long-term life on earth. (Denton, Nature's Destiny)
The following articles and videos give a bit deeper insight into how the elements were formed, and the crucial role that individual elements play in allowing life:
The Elements: Forged in Stars - video http://www.metacafe.com/watch/4003861
Michael Denton - We Are Stardust - Uncanny Balance Of The Elements - Fred Hoyle Atheist to Deist/Theist - video http://www.metacafe.com/watch/4003877
The Role of Elements in Life Processes http://www.mii.org/periodic/LifeElement.php
Periodic Table - Interactive web page for each element http://www.mii.org/periodic/MiiPeriodicChart.htm
bornagain77
It is also interesting to note that 'higher dimensional' mathematics had to be developed before Einstein could formulate General Relativity, or even before Quantum Mechanics could be elucidated:
The Mathematics Of Higher Dimensionality – Gauss & Riemann – video http://www.metacafe.com/watch/6199520/
As well, please note how the higher dimensional 'spiritual' framework of Quantum Mechanics compares to the higher dimensional 'material' framework of General Relativity:
3D to 4D shift - Carl Sagan - video with notes
Excerpt from Notes: The state-space of quantum mechanics is an infinite-dimensional function space. Some physical theories are also by nature high-dimensional, such as the 4-dimensional general relativity. http://www.youtube.com/watch?v=9VS1mwEV9wA
bornagain77
My views on Math were profoundly shaped by the sheer impact of the Euler expression.
The Euler expression is an expression in the complex number system. There is a history of describing the complex numbers as an invented number system. Neil Rickert
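Neil's 'invented' side of the debate is easy to make concrete: the complex numbers can be built from scratch as ordered pairs of reals with a stipulated multiplication rule. The Python sketch below is only an illustration of that standard construction; the sample values are arbitrary.

# A complex number is modelled as a pair (real part, imaginary part),
# with addition and multiplication laid down by definition.
def c_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def c_mul(a, b):
    return (a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0])

i = (0.0, 1.0)                          # the pair playing the role of sqrt(-1)
print(c_mul(i, i))                      # (-1.0, 0.0): i * i = -1 follows from the rule

# The stipulated rule agrees with Python's built-in complex arithmetic:
print(c_mul((2.0, 3.0), (4.0, 5.0)))    # (-7.0, 22.0)
print((2 + 3j) * (4 + 5j))              # (-7+22j)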
The mystery doesn't stop there; the following video shows how pi and e are found in Genesis 1:1 and John 1:1:
Euler's Identity - God Created Mathematics - video http://www.metacafe.com/watch/4003905
The following website and video have the complete working out of the math of Pi and e in the Bible, in the Hebrew and Greek languages respectively, for Genesis 1:1 and John 1:1:
http://www.biblemaths.com/pag03_pie/
Fascinating Bible code – Pi and natural log – Amazing – video (of note: the correct exponent for the base of the natural log found in John 1:1 is 10^40, not 10^65 as stated in the video) http://www.youtube.com/watch?v=Wg9LiiSVae
===============
Nature by Numbers - Cristobal Vila - Fibonacci's Number - beautiful video http://vimeo.com/14018303
The following video is very interesting for revealing how difficult it was for mathematicians to actually 'prove' that mathematics was 'true' in the first place:
Georg Cantor - The Mathematics Of Infinity - video http://www.metacafe.com/watch/4572335
bornagain77
Although Neil is unimpressed that math should be found to govern reality, instead of being merely a useful human construct within reality, I found the following experiment very persuasive for establishing the logical, 'Logos', foundation of reality:
Wheeler's Classic Delayed Choice Experiment:
Excerpt: Now, for many billions of years the photon is in transit in region 3. Yet we can choose (many billions of years later) which experimental set up to employ – the single wide-focus, or the two narrowly focused instruments. We have chosen whether to know which side of the galaxy the photon passed by (by choosing whether to use the two-telescope set up or not, which are the instruments that would give us the information about which side of the galaxy the photon passed). We have delayed this choice until a time long after the particles "have passed by one side of the galaxy, or the other side of the galaxy, or both sides of the galaxy," so to speak. Yet, it seems paradoxically that our later choice of whether to obtain this information determines which side of the galaxy the light passed, so to speak, billions of years ago. So it seems that time has nothing to do with effects of quantum mechanics. And, indeed, the original thought experiment was not based on any analysis of how particles evolve and behave over time – it was based on the mathematics. This is what the mathematics predicted for a result, and this is exactly the result obtained in the laboratory. http://www.bottomlayer.com/bottom/basic_delayed_choice.htm
And to add to kairos's view of the 'profoundness' of Euler's: 0 = 1 + e^(i*pi) — Euler
Believe it or not, the five most important numbers in mathematics are tied together, through the complex domain, in Euler's number. And that points, ever so subtly but strongly, to a world of reality beyond the immediately physical. Many people resist the implications, but there the compass needle points to a transcendent reality that governs our 3D 'physical' reality.
God by the Numbers - Connecting the constants
Excerpt: The final number comes from theoretical mathematics. It is Euler's (pronounced "Oiler's") number: e^(pi*i). This number is equal to -1, so when the formula is written e^(pi*i) + 1 = 0, it connects the five most important constants in mathematics (e, pi, i, 0, and 1) along with three of the most important mathematical operations (addition, multiplication, and exponentiation). These five constants symbolize the four major branches of classical mathematics: arithmetic, represented by 1 and 0; algebra, by i; geometry, by pi; and analysis, by e, the base of the natural log. e^(pi*i) + 1 = 0 has been called "the most famous of all formulas," because, as one textbook says, "It appeals equally to the mystic, the scientist, the philosopher, and the mathematician." http://www.christianitytoday.com/ct/2006/march/26.44.html?start=3
(Of note: Euler's Number (equation) is more properly called Euler's Identity in math circles.)
Moreover Euler's Identity, rather than just being the most enigmatic equation in math, finds striking correlation to how our 3D reality is actually structured.
The following picture, Bible verse, and video are very interesting since, with the discovery of the Cosmic Microwave Background Radiation (CMBR), the universe is found to actually be a sphere, which 'coincidentally' corresponds to the circle of pi within Euler's identity:
Picture of CMBR https://webspace.utexas.edu/reyesr/SolarSystem/cmbr.jpg
Proverbs 8:26-27 While as yet He had not made the earth or the fields, or the primeval dust of the world. When He prepared the heavens, I was there, when He drew a circle on the face of the deep.
The Known Universe by AMNH – video - (please note the 'centrality' of the Earth in the universe in the video) http://www.youtube.com/watch?v=17jymDn0W6U
The flatness of the 'entire' universe, which 'coincidentally' corresponds to the straight 'diameter' of the circle of pi in Euler's identity, is documented on the following site (of note: this flatness of the universe is an extremely finely tuned condition that could, in reality, have taken a multitude of values other than 'flat'):
Did the Universe Hyperinflate? – Hugh Ross – April 2010
Excerpt: Perfect geometric flatness is where the space-time surface of the universe exhibits zero curvature (see figure 3). Two meaningful measurements of the universe's curvature parameter, Ωk, exist. Analysis of the 5-year database from WMAP establishes that -0.0170 < Ωk < 0.0068. Weak gravitational lensing of distant quasars by intervening galaxies places -0.031 < Ωk < 0.009. Both measurements confirm the universe indeed manifests zero or very close to zero geometric curvature. http://www.reasons.org/did-universe-hyperinflate
The following video shows that the universe also has a primary characteristic of expanding/growing equally in all places, which 'coincidentally' strongly corresponds to e in Euler's identity. e is the constant used in all sorts of mathematical equations for finding the true rate of growth or decay in any given problem:
Every 3D Place Is Center In This Universe – 4D space/time – video http://www.metacafe.com/watch/3991873/
Towards the end of the following video, Michael Denton speaks of the square root of negative 1 being necessary to understand the foundational quantum behavior of this universe. The square root of -1 is 'coincidentally' found in Euler's identity:
Michael Denton – Mathematical Truths Are Transcendent And Beautiful – Square root of -1 is built into the fabric of reality – video http://www.metacafe.com/watch/4003918
I find it extremely strange that the enigmatic Euler's identity would find such striking correlation to reality. In pi we have correlation to the 'sphere of the universe' as revealed by the Cosmic Background Radiation, and pi also correlates to the finely-tuned 'geometric flatness' within the 'sphere of the universe' that has now been found. In e we have the fundamental constant that is used for ascertaining exponential growth in math, which strongly correlates to the fact that space-time is 'expanding/growing equally' in all places of the universe. In the square root of -1 we have what is termed an 'imaginary number', which was first proposed to help solve equations like x^2 + 1 = 0 back in the 17th century; yet now, as Michael Denton pointed out in the preceding video, it is found that the square root of -1 is required to explain the behavior of quantum mechanics in this universe.
The correlation of Euler's identity to the foundational characteristics of how this universe is constructed and operates points overwhelmingly to a transcendent Intelligence, with a capital I, which created this universe! It should also be noted that these universal constants, pi, e, and the square root of -1, were at first thought by many to be completely transcendent of any material basis; to find that these transcendent constants of Euler's identity in fact 'govern' material reality, in such a foundational way, should be enough to send shivers down any mathematician's spine.
Here is a very well done video showing the stringent 'mathematical proofs' of Euler's Identity:
Euler's identity - video http://www.youtube.com/watch?v=zApx1UlkpNs
bornagain77
NR: My views on Math were profoundly shaped by the sheer impact of the Euler expression. Namely: 1 + e^(i*pi) = 0
Here is something that should be as artificial as they come, and yet, bingo, it and its context tie together all sorts of real-world things. Mathematics is working because it is capturing a profound aspect of reality. That's why it is so astonishingly reliable and powerful. The number of times a mathematical oddity has turned out to aptly capture reality in physics -- try out the objection to Young and the other wave theorists that, on their wave notions, there should be a dot of light in the middle of the shadow of a tiny ball [then discovered!] -- is beyond explanation otherwise. That needs to be faced and taken seriously.
As far as I am concerned, this is yet another compass-needle pointing to Logic himself -- aka the Logos Himself [cf UD WACs here] -- being behind what we see. And if you want to dismiss this, well, let me observe that I find it amusing how over the past few weeks we have been seeing here at UD multiverse ideas by the million as an alternative, as though this were not a slip over the border from science into metaphysics. (Cf the remarks here.) Once metaphysics is on the table, censorship has to be off the table, and every serious view sits there as of right, not on sufferance -- on pain of censorship.
GEM of TKI
kairosfocus
Humans (through their intelligence) have invented the tools that allow them to express and reveal the underlying mathematical logic of God's creation.
deric davidson
Neil Rickert: "There is somewhat of a debate within mathematics, as to what is discovered and what is invented."
===
Of course. It's called a battle of worldviews. After years of reading discussions from both sides, I don't think reality usually enters into it. The Fogma from both sides gets in the way.
Eocene
As a mathematician, I might perhaps know something about the question. There is somewhat of a debate within mathematics, as to what is discovered and what is invented. Most mathematicians would concede that there is a lot of invention. Kronecker famously said "God gave us the natural numbers; all else is the work of man." Those mathematicians who say that mathematics is discovered are likely to say that it is discovered in a platonic world of ideal forms. Most do not say that it is found in reality. There is rather wide agreement that mathematics is not about reality.
Neil Rickert
Neil, Thanks for that profound insight. You've convinced me with your irrefutable logic and argumentation. GilDodgen
As Denton explains, humans did not invent math, it is built into the nature of things and was discovered.
Denton is quite wrong about that. Neil Rickert
Would that be the same Michael Denton who wrote in Nature's Destiny:
"it is important to emphasize at the outset that the argument presented here is entirely consistent with the basic naturalistic assumption of modern science - that the cosmos is a seamless unity which can be comprehended ultimately in its entirety by human reason and in which all phenomena, including life and evolution and the origin of man, are ultimately explicable in terms of natural processes. This is an assumption which is entirely opposed to that of the so-called "special creationist school". According to special creationism, living organisms are not natural forms, whose origin and design were built into the laws of nature from the beginning, but rather contingent forms analogous in essence to human artifacts, the result of a series of supernatural acts, involving the suspension of natural law. Contrary to the creationist position, the whole argument presented here is critically dependent on the presumption of the unbroken continuity of the organic world - that is, on the reality of organic evolution and on the presumption that all living organisms on earth are natural forms in the profoundest sense of the word, no less natural than salt crystals, atoms, waterfalls, or galaxies." (page xvii-xviii).
Petrushka
Yes Gil, Denton is a bit of a philosophical enigma, yet none-the-less his insights are really interesting. I enjoyed reading both of his books very much, and bought several copies of 'Destiny' to pass out to friends. His comments on mathematics, which you have listed, are very interesting as well and made me dig a bit deeper. Here are a few notes I've collected after listening to the 'math' video you listed; The Underlying Mathematical Foundation Of The Universe -Walter Bradley - video http://www.metacafe.com/watch/4491491 The Five Foundational Equations of the Universe and Brief Descriptions of Each: http://docs.google.com/Doc?docid=0AYmaSrBPNEmGZGM4ejY3d3pfNDdnc3E4bmhkZg&hl=en How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality? — Albert Einstein “… if nature is really structured with a mathematical language and mathematics invented by man can manage to understand it, this demonstrates something extraordinary. The objective structure of the universe and the intellectual structure of the human being coincide.” – Pope Benedict XVI "The reason that mathematics is so effective in capturing, expressing, and modeling what we call empirical reality is that there is a ontological correspondence between the two - I would go so far as to say that they are the same thing." Richard Sternberg - Pg. 8 How My Views On Evolution Evolved Mathematics is the language with which God has written the universe. Galileo Galilei The Unreasonable Effectiveness of Mathematics in the Natural Sciences - Eugene Wigner Excerpt: The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. We should be grateful for it and hope that it will remain valid in future research and that it will extend, for better or for worse, to our pleasure, even though perhaps also to our bafflement, to wide branches of learning. http://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html The following site lists the unchanging constants of the universe: Systematic Search for Expressions of Dimensionless Constants using the NIST database of Physical Constants Excerpt: The National Institute of Standards and Technology lists 325 constants on their website as ‘Fundamental Physical Constants’. Among the 325 physical constants listed, 79 are unitless in nature (usually by defining a ratio). This produces a list of 246 physical constants with some unit dependence. These 246 physical constants can be further grouped into a smaller set when expressed in standard SI base units.,,, http://www.mit.edu/~mi22295/constants/constants.html THE GOD OF THE MATHEMATICIANS - DAVID P. GOLDMAN - August 2010 Excerpt: we cannot construct an ontology that makes God dispensable. Secularists can dismiss this as a mere exercise within predefined rules of the game of mathematical logic, but that is sour grapes, for it was the secular side that hoped to substitute logic for God in the first place. Gödel's critique of the continuum hypothesis has the same implication as his incompleteness theorems: Mathematics never will create the sort of closed system that sorts reality into neat boxes. http://www.faqs.org/periodicals/201008/2080027241.html ‘In chapter 2, I talk at some length on the Schroedinger Equation which is called the fundamental equation of chemistry. 
It's the equation that governs the behavior of the basic atomic particles subject to the basic forces of physics. This equation is a partial differential equation with a complex valued solution. By complex valued I don't mean complicated, I mean involving solutions that are complex numbers, a + b*i, which is extraordinary: that the governing equation, the basic equation, of physics, of chemistry, is a partial differential equation with complex valued solutions. There is absolutely no reason I can think of why the basic particles should obey such an equation, except that it results in elements and chemical compounds with extremely rich and useful chemical properties. In fact I don't think anyone familiar with quantum mechanics would believe that we're ever going to find a reason why they should obey such an equation; they just do! So we have this basic, really elegant mathematical equation, a partial differential equation, which is my field of expertise, that governs the most basic particles of nature, and there is absolutely no reason anyone knows of why it does; it just does. British physicist Sir James Jeans said "From the intrinsic evidence of His creation, the great architect of the universe begins to appear as a pure mathematician", so God is a mathematician too.'
Absolute Truth - Frank Turek - video http://www.youtube.com/watch?v=VaGNRP6Q-6Q
=======================
Matthew 24:35 Heaven and earth will pass away, but my words will never pass away.
The Word - Sara Groves - music video http://www.youtube.com/watch?v=0ofE-GZ8zTU
bornagain77
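The 'complex valued solutions' being described can be checked with a toy example. The Python sketch below uses the free-particle form of the Schrödinger equation in units where h-bar = m = 1 (an illustrative choice of mine, not something from the quoted talk), and verifies numerically that a complex plane wave satisfies it.

import cmath

# Free-particle Schrodinger equation (units h-bar = m = 1):
#   i * d(psi)/dt = -(1/2) * d^2(psi)/dx^2
# The complex plane wave psi = exp(i*(k*x - w*t)) with w = k^2 / 2 should satisfy it.
# The values of k, x, t below are arbitrary choices for illustration.
k = 1.3
w = k * k / 2

def psi(x, t):
    return cmath.exp(1j * (k * x - w * t))

x, t, h = 0.7, 0.4, 1e-4

# Central finite-difference approximations to the derivatives
dpsi_dt   = (psi(x, t + h) - psi(x, t - h)) / (2 * h)
d2psi_dx2 = (psi(x + h, t) - 2 * psi(x, t) + psi(x - h, t)) / (h * h)

print(1j * dpsi_dt)        # left-hand side
print(-0.5 * d2psi_dx2)    # right-hand side: the two complex values agree closely,
                           # both equal (k^2 / 2) * psi at this point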
