Uncommon Descent Serving The Intelligent Design Community

RVB8 and the refusal to mark the difference between description and invention


. . . (of the concept, functionally specific, complex organisation and associated information, FSCO/I)


Sometimes, a longstanding objector here at UD — such as RVB8 — inadvertently reveals just how weak the objections to the design inference are by persistently clinging to long since cogently answered objections. This phenomenon of ideology triumphing over evident reality is worth highlighting as a headlined post illustrating darwinist rhetorical stratagems and habits.

Here is RVB8 in a comment in the current Steve Fuller thread:

RVB8, 36: >> for ID or Creationism, I can get the information direct from the creators of the terminology. Dembski for Specified Complexity, Kairos for his invention of FSCO/I, and Behe for Irreducible Complexity.>>

As it seems necessary to set a pronunciation, the acronym FSCO/I shall henceforth be pronounced “fish-koi” (where, happily, koi are produced by artificial selection, a form of ID too often misused as a proxy for the alleged powers of culling out by differential reproductive success in the wild).

For a long time, he and others of like ilk have tried to suggest that, as I have championed the acronym FSCO/I, the concept I am pointing to is a dubious novelty that has not been tested through peer review or the like and can safely be set aside. In fact, it simply acknowledges that specified complexity is both organisational and informational, and that in many contexts it is specified in the context of requisites of function through multiple coupled parts. Text such as in this post shows a simple form of such a structure, S-T-R-I-N-G-S.

Where of course, memorably, Crick classically pointed out to his son Michael on March 19, 1953 as follows, regarding DNA as text:

Crick’s letter

Subsequently, that code was elucidated (here in the mRNA, transcribed form):

The genetic code uses three-letter codons to specify the sequence of AAs in proteins and to mark start/stop, at six bits of raw information capacity per codon
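The caption's six-bit figure is just the raw capacity of a three-letter codon over a four-letter alphabet, and can be checked in a few lines of Python (an illustrative calculation only; the redundancy figure is the standard 20-amino-acid count):

```python
import math

# Raw information capacity of one codon: 3 bases, 4 possibilities each.
bases_per_codon = 3
bits_per_base = math.log2(4)                 # 2 bits per base
bits_per_codon = bases_per_codon * bits_per_base

codons = 4 ** bases_per_codon                # 64 possible codons

print(bits_per_codon)                        # 6.0 bits of raw capacity per codon
print(codons)                                # 64 codons map onto 20 AAs plus stop
print(round(math.log2(20), 2))               # ~4.32 bits actually needed per AA
```

The gap between 6 bits of capacity and ~4.32 bits needed per amino acid is the code's well-known redundancy (several codons per amino acid).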

Likewise a process flow network is an expression of FSCO/I, e.g. an oil refinery:

Petroleum refinery block diagram illustrating FSCO/I in a process-flow system

This case is much simpler than the elucidated biochemistry process flow metabolic reaction network of the living cell:

I have also often illustrated FSCO/I in the form of functional organisation through a drawing of an ABU 6500 C3 reel (which I safely presume came about through using AutoCAD or the like):

All of this is of course very directly similar to something like protein synthesis [top left in the cell’s biochem outline], which involves both text strings and functionally specific highly complex organisation:

Protein Synthesis (HT: Wiki Media)

In short, FSCO/I is real, relevant and patently descriptive, both of the technological world and the biological world. This demands an adequate causal explanation, and the only serious explanation on the table that is empirically warranted is, design.

As the text of this post illustrates, and as the text of objector comments to come will further inadvertently illustrate.

Now, I responded at no 37, as follows:

KF, 37: >>Unfortunately, your choice of speaking in terms of “invention” of FSCO/I speaks volumes on your now regrettably habitual refusal to acknowledge phenomena that are right in front of you. As in, a descriptive label acknowledges a phenomenon, it does not invent it.

Doubtless [and on long track record], you think that is a clever way to dismiss something you don’t wish to consider.

This pattern makes your rhetoric into a case in point of the sociological, ideological reaction to the design inference on tested sign. So, I now respond, by way of addressing a case of sustained unresponsiveness to evidence.

However, it only reveals that you are being selectively hyperskeptical and dismissive through the fallacy of the closed, ideologised, indoctrinated, hostile mind.

I suggest you need to think again.

As a start, look at your own comment, which is text. To wit, a s-t-r-i-n-g of 1943 ASCII characters, at 7 bits per character, indicating a config space of 2^(7 * 1943) possibilities. That is, a space with 2.037*10^4094 cells.

The atomic and temporal resources of our whole observed cosmos, running at 1 search per each of 10^80 atoms, at 10^12 – 10^14 searches per second [a fast chem reaction rate] for 10^17 s [time since the big bang, approx.], could not search more than 10^111 cells, a negligibly small fraction. That is, the config space search challenge is real: there are not enough resources to blindly search more than a negligibly small fraction of the haystack. (And the notion sometimes put, of somehow having a golden search, runs into the fact that searches are subsets, so a search for a golden search comes from the power set of the direct config space, of order here 2^[10^4094]. That is, it is exponentially harder.)
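The arithmetic above can be checked directly. Here is a short Python sketch using the same figures given in the comment (character count, atom count, search rate and cosmic timescale):

```python
import math

# Config space of the text string: 1943 ASCII characters at 7 bits each.
chars = 1943
bits = 7 * chars                          # 13,601 bits
log10_configs = bits * math.log10(2)
print(round(log10_configs, 1))            # ~4094.3, i.e. about 2.04 * 10^4094 cells

# Generous upper bound on blind search resources:
# 10^80 atoms * 10^14 searches/s * 10^17 s
log10_searches = 80 + 14 + 17
print(log10_searches)                     # 111 -> at most ~10^111 cells examined

# Order of magnitude of the searchable fraction: 10^(111 - 4094.3)
print(round(log10_searches - log10_configs))   # ~ -3983, a negligibly small fraction
```

Working in base-10 logarithms keeps the numbers tractable; the exact integer 2^13601 would have over four thousand digits.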

How then did your text string come to be? By a much more powerful means: you as an intelligent and knowledgeable agent exerted intelligently directed configuration to compose a text in English.

That is why, routinely, when you see or I see text of significant size in English, we confidently and rightly infer to design.

As a simple extension, a 3-d object such as an Abu 6500 C3 fishing reel is describable, in terms of bit strings in a description language, so functional organisation is reducible to an informational equivalent. Discussion on strings is WLOG.
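As an illustrative sketch of that reduction (the part names and couplings below are invented for the example, not taken from the actual reel drawing), any "wiring diagram" expressed in a description language can be serialised to a byte string, at which point its bit length can simply be read off:

```python
import json

# Hypothetical, minimal description-language entry for a 3-D assembly.
# Part names and couplings are illustrative only.
reel = {
    "parts": ["frame", "spool", "gear", "handle"],
    "couplings": [["frame", "spool"], ["spool", "gear"], ["gear", "handle"]],
}

# Serialise the wiring diagram to bytes: the functional organisation
# now exists as an equivalent informational string.
as_bytes = json.dumps(reel, sort_keys=True).encode("utf-8")
print(len(as_bytes) * 8)   # bit length of the description
```

A real CAD file does the same thing at scale: the 3-D organisation is stored as, and fully recoverable from, a string of bits.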

In terms of the living cell, we can simply point to the copious algorithmic TEXT in DNA, which directly fits with the textual search challenge issue. There is no empirically warranted blind chance and mechanical necessity mechanism that can plausibly account for it. We have every epistemic and inductive reasoning right to see that the FSCO/I in the cell is best explained as a result of design.

That ’twerdun, which comes before whodunit.

As for, oh it’s some readily scorned IDiot on a blog, I suggest you would do better to ponder this from Stephen Meyer:

The central argument of my book [= Signature in the Cell] is that intelligent design—the activity of a conscious and rational deliberative agent—best explains the origin of the information necessary to produce the first living cell. I argue this because of two things that we know from our uniform and repeated experience, which following Charles Darwin I take to be the basis of all scientific reasoning about the past. First, intelligent agents have demonstrated the capacity to produce large amounts of functionally specified information (especially in a digital form). Second, no undirected chemical process has demonstrated this power. Hence, intelligent design provides the best—most causally adequate—explanation for the origin of the information necessary to produce the first life from simpler non-living chemicals. In other words, intelligent design is the only explanation that cites a cause known to have the capacity to produce the key effect in question . . . . In order to [[scientifically refute this inductive conclusion] Falk would need to show that some undirected material cause has [[empirically] demonstrated the power to produce functional biological information apart from the guidance or activity of a designing mind. Neither Falk, nor anyone working in origin-of-life biology, has succeeded in doing this . . . .

The central problem facing origin-of-life researchers is neither the synthesis of pre-biotic building blocks (which Sutherland’s work addresses) nor even the synthesis of a self-replicating RNA molecule (the plausibility of which Joyce and Tracey’s work seeks to establish, albeit unsuccessfully . . . [[Meyer gives details in the linked page]). Instead, the fundamental problem is getting the chemical building blocks to arrange themselves into the large information-bearing molecules (whether DNA or RNA) . . . .

For nearly sixty years origin-of-life researchers have attempted to use pre-biotic simulation experiments to find a plausible pathway by which life might have arisen from simpler non-living chemicals, thereby providing support for chemical evolutionary theory. While these experiments have occasionally yielded interesting insights about the conditions under which certain reactions will or won’t produce the various small molecule constituents of larger bio-macromolecules, they have shed no light on how the information in these larger macromolecules (particularly in DNA and RNA) could have arisen. Nor should this be surprising in light of what we have long known about the chemical structure of DNA and RNA. As I show in Signature in the Cell, the chemical structures of DNA and RNA allow them to store information precisely because chemical affinities between their smaller molecular subunits do not determine the specific arrangements of the bases in the DNA and RNA molecules. Instead, the same type of chemical bond (an N-glycosidic bond) forms between the backbone and each one of the four bases, allowing any one of the bases to attach at any site along the backbone, in turn allowing an innumerable variety of different sequences. This chemical indeterminacy is precisely what permits DNA and RNA to function as information carriers. It also dooms attempts to account for the origin of the information—the precise sequencing of the bases—in these molecules as the result of deterministic chemical interactions . . . .

[[W]e now have a wealth of experience showing that what I call specified or functional information (especially if encoded in digital form) does not arise from purely physical or chemical antecedents [[–> i.e. by blind, undirected forces of chance and necessity]. Indeed, the ribozyme engineering and pre-biotic simulation experiments that Professor Falk commends to my attention actually lend additional inductive support to this generalization. On the other hand, we do know of a cause—a type of cause—that has demonstrated the power to produce functionally-specified information. That cause is intelligence or conscious rational deliberation. As the pioneering information theorist Henry Quastler once observed, “the creation of information is habitually associated with conscious activity.” And, of course, he was right. Whenever we find information—whether embedded in a radio signal, carved in a stone monument, written in a book or etched on a magnetic disc—and we trace it back to its source, invariably we come to mind, not merely a material process. Thus, the discovery of functionally specified, digitally encoded information along the spine of DNA, provides compelling positive evidence of the activity of a prior designing intelligence. This conclusion is not based upon what we don’t know. It is based upon what we do know from our uniform experience about the cause and effect structure of the world—specifically, what we know about what does, and does not, have the power to produce large amounts of specified information . . . .

[[In conclusion,] it needs to be noted that the [[now commonly asserted and imposed limiting rule on scientific knowledge, the] principle of methodological naturalism [[ that scientific explanations may only infer to “natural[[istic] causes”] is an arbitrary philosophical assumption, not a principle that can be established or justified by scientific observation itself. Others of us, having long ago seen the pattern in pre-biotic simulation experiments, to say nothing of the clear testimony of thousands of years of human experience, have decided to move on. We see in the information-rich structure of life a clear indicator of intelligent activity and have begun to investigate living systems accordingly. If, by Professor Falk’s definition, that makes us philosophers rather than scientists, then so be it. But I suspect that the shoe is now, instead, firmly on the other foot. [[Meyer, Stephen C: Response to Darrel Falk’s Review of Signature in the Cell, SITC web site, 2009. (Emphases and parentheses added.)]

Let me focus attention on the highlighted:

First, intelligent agents have demonstrated the capacity to produce large amounts of functionally specified information (especially in a digital form). Second, no undirected chemical process has demonstrated this power. Hence, intelligent design provides the best—most causally adequate—explanation for the origin of the information necessary to produce the first life from simpler non-living chemicals.

The only difference between this and what I have highlighted through the acronym FSCO/I is that functionally specific organisation is similarly reducible to an informational string and is in this sense equivalent to it. That is hardly news; AutoCAD has reigned supreme as an engineer’s design tool for decades now. Going back to 1973, Orgel, in his early work on specified complexity, wrote:

. . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . .

[HT, Mung, fr. p. 190 & 196:] These vague ideas can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. [–> this is of course equivalent to the string of yes/no questions required to specify the relevant “wiring diagram” for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002, also cf here, here and here (with here on self-moved agents as designing causes).] One can see intuitively that many instructions are needed to specify a complex structure. [–> so if the q’s to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions. [–> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes. [The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196.]

So, the concept of reducing functional organisation to a description on a string of y/n structured questions — a bit string in some description language — is hardly news, nor is it something I came up with. Where obviously Orgel is speaking to FUNCTIONAL specificity, so that is not new either.
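Orgel's contrast between repeating and random structures can be illustrated, loosely, with a general-purpose compressor. Compressed length is only a rough proxy for his "minimum number of instructions" (it is not a formal complexity measure), but the pattern he describes shows up clearly:

```python
import random
import zlib

random.seed(0)  # fixed seed so the illustration is repeatable

repetitive = b"AB" * 1000                                       # simple repeating structure
random_str = bytes(random.randrange(256) for _ in range(2000))  # complex but unspecified

# A repeating structure needs few "instructions": in effect, "do AB, repeat 1000x".
print(len(zlib.compress(repetitive)))   # a few dozen bytes

# A random structure resists any short description: it is nearly incompressible.
print(len(zlib.compress(random_str)))   # roughly the original 2000 bytes
```

The interesting third case, functional text such as this post, is neither trivially repetitive nor random: it is long, aperiodic, and pinned to a specification (English grammar and meaning).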

Likewise, the concept of search spaces or config spaces is a simple reflection of the phase space concept of statistical thermodynamics.

Dembski’s remarks are also significant, here from NFL:

p. 148:“The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology.

I submit that what they have in mind is specified complexity, or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . .

Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. . . . In virtue of their function [[a living organism’s subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole. Dembski cites:

Wouters, p. 148: “globally in terms of the viability of whole organisms,”

Behe, p. 148: “minimal function of biochemical systems,”

Dawkins, pp. 148 – 9: “Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is . . . the ability to propagate genes in reproduction.”

On p. 149, he roughly cites Orgel’s famous remark from 1973, which, exactly cited, reads:

In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . .

And, p. 149, he highlights Paul Davies in The Fifth Miracle: “Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity.”] . . .”

p. 144: [[Specified complexity can be more formally defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”
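The correspondence Dembski draws between a 1-in-10^150 probability bound and a 500-bit complexity bound is a simple change of logarithm base, which a two-line check confirms:

```python
import math

# Express the 1-in-10^150 universal probability bound in bits:
bits = 150 * math.log2(10)
print(round(bits, 1))   # 498.3 -> rounded up to the 500-bit complexity bound
```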

So, the problem of refusal to attend to readily available evidence, or even evidence put directly in front of objectors to design theory, is significant and clear.

What it reflects in the end, as a case of clinging to fallacies and myths in the teeth of correction for years on end, is the weakness of the case being made against design by its persistent objectors.

Which is itself highly significant.>>

Now, let us discuss, duly noting the highlighted and emphasised. END

Comments
CR, your theory has been thoroughly criticized. It presents nothing substantive about the most distinctive and important aspect of the system it's being applied to. It doesn't even mention it. Even your response fails to address the issue. Cheers.
Upright BiPed
March 8, 2017, 10:16 PM PDT
@UB
The physical independence created by such an organization happens to be exactly what is physically required to describe the system in a transcribable memory and interpret the description. And the only other place such a physical system can be identified is in written language and mathematics. That’s one part of the inference to design in biology. It’s a completely empirical and unapologetic inference to design, identified right at the point where biology begins.
Again, it's unclear how a theory about physics and its relationship with information is not relevant to an argument based on the relationship between physics and information.
critical rationalist
March 8, 2017, 09:16 PM PDT
@UB
As for your post at 134, I have no idea what it would even mean to say that “the design of organisms exist in these physical constraints”. Likewise, I have no idea what it means to say that “physical interpretations are built into the laws of physics”.
First, if you have no idea what that would mean, then how do you know it's not relevant to the topic? Second, if you read the paper I referenced, that was addressed at length, through contrast with no-design laws of physics.
In the biosphere self-reproduction is approximated to various accuracies. There are many poor approximations to self-reproducers - e.g., crude replicators such as crystals, short RNA strands and autocatalytic cycles involved in the origin of life [11]. Being so inaccurate, they do not require any further explanation under no-design laws: they do not have appearance of design, any more than simple inorganic catalysts do.(4) In contrast, actual gene-replication is an impressively accurate physical transformation, albeit imperfect. But even more striking is that living cells can self-reproduce to high accuracy in a variety of environments, reconstructing the vehicle afresh, under the control of the genes, in all the intricate details necessary for gene replication. This is prima facie problematic under no-design laws: how can those processes be so accurate, without their design being encoded in the laws of physics? This is why some physicists - notably, Wigner and Bohm, [12], [13] - have even claimed that accurate self-reproduction of an organism with the appearance of design requires the laws of motion to be “tailored” for the purpose – i.e., they must contain its design [12].
and...
No-design laws can be expressed exactly in constructor theory, too. First, I define “generic resources” as substrates that exist in effectively unlimited numbers. In the context of early life on this planet, these include only elementary entities such as photons, water, simple catalysts and small organic molecules. It has sometimes been proposed that the very existence of laws of nature constitutes a form of “design” in them, [23]. In contrast, for present purposes no-design laws are those that do not contain the design of biological adaptations - i.e., of what the theory of evolution aims at explaining: for the problem here is whether the physical processes assumed by the theory of evolution are possible under such laws. Consequently I require no-design laws to satisfy these conditions: - Generic resources can only perform a few tasks, only to a finite accuracy, called elementary tasks. These are physically simple and contain no design (of biological adaptations). Familiar examples are spontaneous, approximately self-correcting chemical reactions, such as molecules “snapping” into a catalyst regardless of any original small mismatch. - No good approximation to a constructor for tasks that are non-elementary can ever be produced by generic resources acting on generic resources only.
Under no-design laws, the generic resources and the interactions available in nature are allowed to contain only those approximate constructors that unequivocally do not have the design of those very adaptations the theory of evolution is required to explain.(7) Examples of laws that would violate these conditions are: laws including accurate constructors, such as bacteria, in the generic resources; laws with “copy-like” interactions, designed to copy the configuration of atoms of a bacterium onto generic resources; laws permitting spontaneous generation of a bacterium directly from generic resources only; laws permitting only mutations that are systematically directed to improvements in a certain environment. The exact characterisation of no-design laws is a departure from the prevailing conception - which can at most characterise them as being typical, according to some measure, in the space of all laws. The latter is unsuitable for present purposes, as the choice of the measure is highly arbitrary. Moreover, it is misleading: some laws that may be untypical under some natural measure - such as the actual laws of physics, because of, say, local interactions - need not contain the design of biological adaptations, thus qualifying as no-design in this context. Furthermore, laws with the design of biological adaptations are a proper subset of those laws that in the context of anthropic fine tuning have been called “bio-friendly”: those having features - such as local interactions, or special values of the fine-structure constant, etc. - which, if slightly changed, would cause life as we know it to be impossible. These features, though necessary to life, are not specific to life: their variation would make impossible many other phenomena, non specifically related to biological adaptations. The problem can now be restated in constructor theory, as: are accurate self-reproducers and replicators possible under no-design laws?
I shall prove that an accurate self-reproducer is possible under no-design laws, provided they allow information to be physically instantiated; from this it will follow that an accurate replicator is possible too, provided that it be contained in a self-reproducer, (sections 3.1 - 3.3). I will assume that the raw materials of self-reproduction (N in (1), (2)) comprises generic substrates only. This over-stringent assumption rules out the realistic situation that they contain other organisms; but it is acceptable for present purposes because if accurate self-reproduction and replication are allowed under these over-stringent requirements, so are they when the generic resources contain also living organisms. Before presenting the argument, I shall recall the basics of the constructor theory of information (section 2.1). This is crucial to give an exact characterisation of what it means for the laws of physics to allow information to be physically instantiated.
critical rationalist
March 8, 2017, 09:12 PM PDT
@UB You wrote:
CR, your post at 133 doesn’t directly address any of my empirical criticism of your position, so I don’t feel particularly compelled to respond to it.
Information theory and its relation to physics isn't relevant despite the topic of discussion being the information in DNA?
Also, the information in DNA (the topic of this conversation) doesn’t need to be “brought into fundamental physics” by “constructor theory”; it has been well-understood in terms of fundamental physics for a great number of years. Additionally, I don’t know why you introduced Shannon to the conversation.
I don't feel particularly compelled to respond to a theory of information merely defined as the one "everyone knows".
critical rationalist
March 8, 2017, 08:50 PM PDT
By the way CR, --- "A more intriguing and relevant question is how does a lawfully determined system enable the specification of unlimited variation in an environment that allows no alternatives to those laws." The physical independence created by such an organization happens to be exactly what is physically required to describe the system in a transcribable memory and interpret the description. And the only other place such a physical system can be identified is in written language and mathematics. That's one part of the inference to design in biology. It's a completely empirical and unapologetic inference to design, identified right at the point where biology begins. The reason I am telling you this is that it might give you an opportunity (with a clear explanation in hand) to go read up on the system, verify it for yourself, and then re-read your articles.
Upright BiPed
March 8, 2017, 06:16 PM PDT
CR, your post at 133 doesn't directly address any of my empirical criticism of your position, so I don't feel particularly compelled to respond to it.

As for your post at 134, I have no idea what it would even mean to say that "the design of organisms exist in these physical constraints". Likewise, I have no idea what it means to say that "physical interpretations are built into the laws of physics".

The arrangement of codons in a DNA sequence establishes what pattern of amino acids will appear in a polypeptide, and the collective arrangements of the aaRS specify which amino acids will appear in that pattern. These things have been well known for half a century. End of mystery.

The notion that we need to show that biological organization is "possible under no-design laws" seems rather meaningless. A more intriguing and relevant question is how does a lawfully determined system enable the specification of unlimited variation in an environment that allows no alternatives to those laws. That question has already been answered.
Upright BiPed
March 8, 2017, 05:33 PM PDT
critical rationalist: This theory you are talking about seems to be largely OT for the current thread. Please feel free to put together a brief exposition of the theory and how it is relevant to Darwinian evolution and/or design, and I'd be happy to elevate it to a new thread so we can discuss in more detail.
Eric Anderson
March 8, 2017, 04:58 PM PDT
Just because variation is random to any problem to solve, how does that stand in opposition to it being completely random? Why can variation not be random to any problem to solve as well as being completely random?
Variation in the process of evolution is not completely random. This is because it's a repeating process of variation and selection, not just variation on its own. Just as in the growth of human knowledge, guesses are not completely random. This is because conjectures take into account background knowledge that itself came from earlier conjectures and criticisms. This happens consciously and subconsciously. Ever find yourself about to suggest a solution, but then say "never mind", since it won't work? That's a conjectured solution that just slipped through subconscious criticism. A vast number of solutions don't make it that far. And then there is instinct, which is itself based on variation and selection, such as a foal that can walk just after being born. IOW, all knowledge grows through some form of conjecture and criticism. It's a universal theory that brings unification - just like gravity unified the motions of apples and planets. However, you seem to be suggesting that we can't make any progress on the subject of knowledge, since unification is impossible.
critical rationalist
March 8, 2017, 04:22 PM PDT
@UB I'm trying to understand what you mean here. You wrote:
What you call “knowledge” is actually representations encoded in a material medium. Like all representations, they require interpretation via physical constraint. As an example, the representations contained in DNA (codons) are interpreted by a set of contingent physical constraints (aaRS) in order to produce functional proteins. This reflects the Peircean logic that representation and interpretation are necessarily complimentary realities. This logic was followed by Turing; followed by von Neumann; and is demonstrated in every instance of recorded information ever known to exist. Not only was it predicted by logic and reason, but it has been demonstrated in physics, and in the structural architecture of the system itself.
Are you suggesting that the design of organisms already existed in these physical constraints? If so, this sounds like the opposite of "no-design laws" mentioned in the paper, where the physical interpretations are somehow built into the laws of physics. From the paper....
In the biosphere self-reproduction is approximated to various accuracies. There are many poor approximations to self-reproducers - e.g., crude replicators such as crystals, short RNA strands and autocatalytic cycles involved in the origin of life [11]. Being so inaccurate, they do not require any further explanation under no-design laws: they do not have appearance of design, any more than simple inorganic catalysts do.(4) In contrast, actual gene-replication is an impressively accurate physical transformation, albeit imperfect. But even more striking is that living cells can self-reproduce to high accuracy in a variety of environments, reconstructing the vehicle afresh, under the control of the genes, in all the intricate details necessary for gene replication. This is prima facie problematic under no-design laws: how can those processes be so accurate, without their design being encoded in the laws of physics? This is why some physicists - notably, Wigner and Bohm, [12], [13] - have even claimed that accurate self-reproduction of an organism with the appearance of design requires the laws of motion to be “tailored” for the purpose – i.e., they must contain its design [12].
critical rationalist
March 8, 2017 at 03:52 PM PDT
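The codon-to-protein interpretation discussed above can be illustrated in miniature. The following is an illustrative sketch only (Python, with a four-entry excerpt of the standard genetic code standing in for both the full codon table and the aaRS machinery):

```python
# Excerpt of the standard genetic code (mRNA codon -> amino acid).
# Only four entries are shown; the real table has 64.
GENETIC_CODE = {
    "AUG": "Met",   # start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UAA": "STOP",  # stop codon
}

def translate(mrna):
    """Read an mRNA string three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = GENETIC_CODE[mrna[i:i + 3]]
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

print(translate("AUGUUUGGCUAA"))  # → ['Met', 'Phe', 'Gly']
```

In the cell the “table” is not a lookup structure but a set of physical constraints (the aminoacyl-tRNA synthetases); the sketch shows only the logical shape of representation plus interpretation.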
@UB You wrote:
You were making statements about the physical requirements of the first self-replicating cells on earth....
However, the primary requirement for the first heterogeneous self-replicating cells on earth was their capacity to produce a description of themselves in a transcribable memory and be able to successfully interpret the description.
I am looking for no such [theory of information that is physical] Also, the information in DNA (the topic of this conversation) doesn’t need to be “brought into fundamental physics” by “constructor theory”; it has been well-understood in terms of fundamental physics for a great number of years. Additionally, I don’t know why you introduced Shannon to the conversation.
UB, If not Shannon's theory, then what physical theory that we have supposedly known for a great number of years are you referring to? And how does it account for information in the context of quantum computation? Please be specific.
critical rationalist
March 8, 2017 at 03:36 PM PDT
CR:
Again, variation in evolution is random to a specific problem to solve, not completely random.
And, again: Just because variation is random to any problem to solve, how does that stand in opposition to it being completely random? Why can variation not be random to any problem to solve as well as being completely random?
This is because proteins in evolutionary theory do not arise all at once from random variations.
What? That doesn't make the variations themselves non-random. In a game of Yahtzee, I can select dice toward some goal (which, as Origenes points out so clearly above, is NOT what natural selection does), but that doesn't preclude in any way the fact that each roll of the dice (the variation) is still utterly random. For it not to be random, the dice would have to be loaded somehow. Are you suggesting that variation is loaded? What is influencing the variation itself such that it is not random?
Phinehas
March 8, 2017 at 02:37 PM PDT
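Phinehas's Yahtzee point above, that selecting dice toward a goal leaves each individual roll uniformly random, can be checked with a short simulation. A minimal sketch (Python assumed; the names and the keep-sixes rule are illustrative, not from the thread):

```python
import random
from collections import Counter

# Illustrative only: goal-directed *selection* of dice does not change the
# randomness of each individual *roll*.
def roll(n, rng):
    return [rng.randint(1, 6) for _ in range(n)]

rng = random.Random(42)             # fixed seed for reproducibility
hands = [roll(5, rng) for _ in range(10_000)]

# A selector with a goal: keep only sixes from each hand.
kept = [d for hand in hands for d in hand if d == 6]

# The underlying variation remains uniform: every face still turns up
# about 1/6 of the time, no matter what the selector keeps afterwards.
counts = Counter(d for hand in hands for d in hand)
for face in range(1, 7):
    assert abs(counts[face] / 50_000 - 1 / 6) < 0.01
```

The selection step changes which dice survive into `kept`, but not the distribution generating the rolls themselves.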
CR, #120 You were making statements about the physical requirements of the first self-replicating cells on earth. You claimed that these first cells did not need “great precision”, and you based this conclusion on the idea that they didn't have to compete with better replicators than themselves. However, the primary requirement for the first heterogeneous self-replicating cells on earth was their capacity to produce a description of themselves in a transcribable memory and be able to successfully interpret the description. My question to you was intended to gauge how you were taking these requirements into consideration, which clearly, you were not doing.
I’m going to assume you’re looking for a theory of information that is physical…
There’s no need for this assumption; I am looking for no such thing. Also, the information in DNA (the topic of this conversation) doesn’t need to be “brought into fundamental physics” by “constructor theory”; it has been well-understood in terms of fundamental physics for a great number of years. Additionally, I don’t know why you introduced Shannon to the conversation.
If by “constraints” you do not mean what is possible and not possible, then please clarify.
(Attempting to speak to you using your map of the road) What you call “knowledge” is actually representations encoded in a material medium. Like all representations, they require interpretation via physical constraint. As an example, the representations contained in DNA (codons) are interpreted by a set of contingent physical constraints (aaRS) in order to produce functional proteins. This reflects the Peircean logic that representation and interpretation are necessarily complementary realities. This logic was followed by Turing, then by von Neumann, and is demonstrated in every instance of recorded information ever known to exist. Not only was it predicted by logic and reason, but it has been demonstrated in physics, and in the structural architecture of the system itself. So, circling back to the top of the issue, in order to establish the life cycle of the heterogeneous cell, you have to have enough of these organized representations and constraints to describe the system in a transcribable medium and be able to successfully interpret the description. It is only the coordination of these two sets of objects that enables the system to persist. Thus, statements about the origin of the living cell that either obscure or ignore these fundamental requirements are basically useless to the conversation.
Upright BiPed
March 8, 2017 at 02:05 PM PDT
critical rationalist,
CR: Again, variation in evolution is random to a specific problem to solve, not completely random.
Mutation is completely random period.
CR: This is because proteins in evolutionary theory do not arise all at once from random variations.
Even if that is true, which it isn't, it doesn't change anything: we start with a protein and next some completely random change is going to happen. Now, proteins and everything else are caused by sheer dumb luck, according to a proper understanding of evolutionary theory. Natural selection does nothing to help and makes matters worse. Sheer dumb luck is all you have to offer: Given that natural selection is a process of elimination, existent organisms are the ones that got away. Instead of being created by ‘natural elimination’, exactly the opposite is true: they are “untouched” by ‘natural elimination’. Existent organisms are those organisms on which natural selection has precisely no bearing whatsoever. They are the undiluted products of chance.
“CHANCE ALONE is at the source of every innovation, of all creation in the biosphere. Pure chance, absolutely free but blind, is at the very root of the stupendous edifice of creation.” [Jacques Monod]
CR: Natural selection plays the role of criticism in evolution.
Natural selection elimination makes evolution perform worse than a blind search — see #125.
Origenes
March 8, 2017 at 01:34 PM PDT
Or, rationality, along with recognizing a goal, is what saves our problem-solving from being completely random…like variation is.
Again, variation in evolution is random to a specific problem to solve, not completely random. This is because proteins in evolutionary theory do not arise all at once from random variations. Natural selection plays the role of criticism in evolution. So, it's not completely random, either. My key point is, in both cases, we start out with something that isn't guaranteed to be true. Theories are tested by observations, not derived from them. People can create useful rules of thumb and accidentally solve problems without recognizing them as such at the time or having that goal in mind. See my concrete example above. An educated guess is still a guess, nonetheless.
critical rationalist
March 8, 2017 at 12:53 PM PDT
CR:
In the case of people, rationality is an additional means by which we can criticize and eliminate them.
Or, rationality, along with recognizing a goal, is what saves our problem-solving from being completely random...like variation is.
Phinehas
March 8, 2017 at 11:18 AM PDT
@Origenes
I would say that creative solutions are “in there” for us to observe via our internal senses.
All solutions to all problems are inside us and we can simply observe them with our "internal senses"? So, how does that work? Please be specific.
In short, rationality cannot be compared to random mutations.
I'm not comparing them. Rationality is an approach to how we criticize our theories. As you said, it's part of our "process of elimination", not a source of our conjectured ideas. Again, I'm suggesting people start out with a problem to solve, conjecture theories about how the world works that solve those problems, criticize them, which includes empirical tests, and then discard those we find in error. Nor am I suggesting all knowledge is the same. While people can create both explanatory and non-explanatory knowledge, only people can create explanatory theories. To elaborate, imagine I’ve been shipwrecked on a deserted island and I have partial amnesia due to the wreck. I remember that coconuts are edible so I climb a tree to pick them. While attempting to pick a coconut, one falls, lands on a rock and splits open. Note that I did not intend for the coconut to fall, let alone plan for it to fall because I guessed coconuts that fall on rocks might crack open. The coconut falling was random *in respect to a problem I hadn’t yet even tried to solve*. Yet it ended up solving a problem regardless. Furthermore, due to my amnesia, I’ve hypothetically forgotten what I know about physics, including mass, inertia, etc. Specifically, I lack an explanation as to why the coconut landing on the rock causes it to open. As such, my knowledge of how to open coconuts is merely a useful rule of thumb, which is limited in reach. For example, in the absence of an explanation, I would likely assume I'd need to collect coconuts picked from other trees, carry them to this same tree, climb it, then drop them on the same rocks to open them. However, explanatory knowledge has significant reach. Specifically, if my explanatory knowledge of physics, including inertia, mass, etc. returned, I could use that explanation to strike the coconut with any similar sized rock, rather than vice versa.
Furthermore, I could exchange the rock with another object with significant mass, such as an anchor, and open objects other than coconuts, such as shells, use this knowledge to protect myself from attacking wildlife, etc. So, explanatory knowledge only comes from intentional conjectures made by people and has significant reach. Non-explanatory knowledge (created by variation that is random to specific problems to solve, and selection) represents unintentional conjectures, which have limited reach. None of that was gained from my experience. While there are important differences, neither variations in evolution nor theories conjectured by people come with any guarantee they are correct. In the case of people, rationality is an additional means by which we can criticize and eliminate them.
critical rationalist
March 8, 2017 at 10:45 AM PDT
CR:
We start out with a problem to solve, conjecture solutions to those problems then criticize them and discard errors we find. Creative solutions are not “out there” for us to observe via our senses any more than creative solutions to biological problems faced by organisms. They start out as guesses which are criticized.
Right. So we start out with a goal. That goal helps us develop methods or heuristics for determining whether we are approaching the goal or moving further away from it. These are refined as we continue our search for a solution. Thus, our search for a solution is not blind at all. Variation doesn't have a goal. There is no 'solution' for it to find because there is no 'problem' in the first place. It cannot define any methods or heuristics for determining whether anything will get it closer to a target it doesn't have. It's merely taking pot-shots in the dark. Thus, its search for any solution whatsoever is totally and completely blind. Even if you can imagine lots of theoretical targets out there, variation has no concept of a near miss and no process for refining its aim. It continues to fire randomly and either hits a target or not.
In evolution, variation is random to any problem to solve, as opposed to being completely random.
Just because variation is random to any problem to solve, how does that stand in opposition to it being completely random? Why can variation not be random to any problem to solve as well as being completely random? Where and how is variation not completely random? What is directing it?
Phinehas
March 8, 2017 at 09:50 AM PDT
critical rationalist
CR: We start out with a problem to solve, conjecture solutions to those problems.
We do this with a goal in mind and understanding.
CR: Creative solutions are not “out there” for us to observe via our senses any more than creative solutions to biological problems faced by organisms.
I would say that creative solutions are “in there” for us to observe via our internal senses. In short, rationality cannot be compared to random mutations.
CR: … creative solutions to biological problems faced by organisms. They start out as guesses which are criticized.
Random “guesses”, based on … neither plan nor understanding. And ‘criticized’ by two things: 1. The filter of existence — is the organism still viable? 2. Random environmental change. Now ‘natural selection’ has to do with (2), which means that perfectly viable organisms are eliminated on a whim. Think about it: ‘natural selection’ removes perfectly viable organisms. Random mutations hit the jackpot and produce a miracle — a perfectly viable creature — and next natural selection elimination steps in .... Organisms that could have unique solutions to the problems life was trying to solve. Or organisms that could be on the brink of evolving a spectacular new feature. Eliminated, because of a temporary drought, a severe winter, an epidemic or whatever. That’s clearly beyond ‘criticizing’; natural selection is a hindrance to evolution. Evolution would be better off without it. Natural selection makes evolution perform worse than a blind search.
Origenes
March 8, 2017 at 09:03 AM PDT
Eric, DNA contains knowledge. What do I mean by that? Knowledge is information that plays a causal role in being retained when embedded in a storage medium. As pointed out above, this includes knowledge found in brains, books and genes. Nor does it require a knowing subject. From Popper's book Objective Knowledge..
"Let me repeat one of my standard arguments for the (more or less) independent existence of world 3. I consider two thought experiments: Experiment (1). All our machines and tools are destroyed, and all our subjective learning, including our subjective knowledge of machines and tools, and how to use them. But libraries and our capacity to learn from them survive. Clearly, after much suffering, our world may get going again. Experiment (2). As before, machines and tools are destroyed, and our subjective learning, including our subjective knowledge of machines and tools, and how to use them. But this time, all libraries are destroyed also, so that our capacity to learn from books becomes useless."
Knowledge: Subjective Versus Objective, page 59
critical rationalist
March 8, 2017 at 08:33 AM PDT
@Origenes#87
‘Natural selection’ is, in fact, a process of elimination. Elimination only explains why some organisms go out of existence, but does not explain why organisms come into existence. Darwin’s theory promotes the false belief that elimination is creative.
This is what I mean by assuming we know nothing about how human designers design things. We start out with a problem to solve, conjecture solutions to those problems then criticize them and discard errors we find. Creative solutions are not "out there" for us to observe via our senses any more than creative solutions to biological problems faced by organisms. They start out as guesses which are criticized. In evolution, variation is random to any problem to solve, as opposed to being completely random. So, what we have is a universal theory for the growth of knowledge. This includes knowledge found in brains, books and even the genome.
critical rationalist
March 8, 2017 at 08:24 AM PDT
Origenes, good points. Yes, there are multiple levels of functional integration and organization before we get to a complete organism, particularly a large, multi-cellular organism. At a very basic, foundational stage I'm just trying to get some people to acknowledge and articulate the important difference between DNA and a rock at this point! One step at a time. :)
Eric Anderson
March 8, 2017 at 08:08 AM PDT
timothya:
Not some, but all physical objects contain what you describe as information.
You are perilously close to denying objective reality, which will render any further discussion pointless and prevent you from even comprehending the issue KF is raising, much less being able to engage in a useful discussion about it. I echo KF's kind request in his first paragraph @119 for you to clarify. If the question about whether there is information in physical objects generally is too nuanced and confusing, we can approach the issue from an easier angle. Let's give you one more try, back to the basics: Do you or do you not acknowledge that there is a difference between the information contained in your genome and the information you think is contained in "all physical objects"? And what is that difference? Please do answer logically and honestly. Don't worry. An honest answer to this question by itself won't mean you have lost the debate about functional specified complexity. It won't mean that intelligent design is true. It won't mean that Darwinism and the materialist creation story are false. But it will help us assess whether you even understand one of the most basic and fundamental issues on the table.
Eric Anderson
March 8, 2017 at 07:37 AM PDT
@UB
“please summarize the number of different physical constraints required to interpret the “recipe” and the number of representations within that “recipe” that are required to describe the construction of the constraints?”
Unfortunately, indicating you have "read the second paper" has not clarified what you mean by "constraints". As such, I'm going to assume you're looking for a theory of information that is physical, despite the fact that information is media independent, and does not have Shannon's circularity in defining what is distinguishable. Is that correct? If so, that's why I posted the link to the first paper. Specifically, it brings information into fundamental physics using constructor theory - what must be possible and impossible. Also, Shannon's theory is lacking because it is not compatible with information in the context of quantum mechanics and computation. IOW, if by "constraints" you do not mean what must be possible and not possible, then please clarify.
critical rationalist
March 8, 2017 at 07:30 AM PDT
TA, kindly define information as you use it, and provide some backdrop for justifying that usage. Explain to us how your usage is not a case of so broadening a concept as to render it useless, which is a rhetorical tactic that is often driven by ideological considerations. KF PS: As a starter, observe this on entropy and information onward to FSCO/I, as clipped in my always linked note, which is key backdrop:
we may average the information per symbol in [a] communication system thusly (giving in terms of -H to make the additive relationships clearer): - H = p1 log p1 + p2 log p2 + . . . + pn log pn or, H = - SUM [pi log pi] . . . Eqn 5 H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: "it is often referred to as the entropy of the source." [p.81, emphasis added.] Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1 below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form. Though somewhat controversial even in quite recent years, this is becoming more broadly accepted in physics and information theory, as Wikipedia now discusses [as at April 2011] in its article on Informational Entropy (aka Shannon Information, cf also here):
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . 
in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
Summarising Harry Robertson's Statistical Thermophysics (Prentice-Hall International, 1993) -- excerpting desperately and adding emphases and explanatory comments, we can see, perhaps, that this should not be so surprising after all. (In effect, since we do not possess detailed knowledge of the states of the vary large number of microscopic particles of thermal systems [typically ~ 10^20 to 10^26; a mole of substance containing ~ 6.023*10^23 particles; i.e. the Avogadro Number], we can only view them in terms of those gross averages we term thermodynamic variables [pressure, temperature, etc], and so we cannot take advantage of knowledge of such individual particle states that would give us a richer harvest of work, etc.) For, as he astutely observes on pp. vii - viii: . . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . . And, in more details, (pp. 3 - 6, 7, 36, cf Appendix 1 below for a more detailed development of thermodynamics issues and their tie-in with the inference to design; also see recent ArXiv papers by Duncan and Samura here and here): . . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . 
for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . . [deriving informational entropy, cf. discussions here, here, here, here and here; also Sarfati's discussion of debates and the issue of open systems here . . . ] H({pi}) = - C [SUM over i] pi*ln pi, [. . . "my" Eqn 6] [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp - beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . . [H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . . 
Jayne's [summary rebuttal to a typical objection] is ". . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly 'objective' quantity . . . it is a function of [those variables] and does not depend on anybody's personality. There is no reason why it cannot be measured in the laboratory." . . . . [pp. 3 - 6, 7, 36; replacing Robertson's use of S for Informational Entropy with the more standard H.] As is discussed briefly in Appendix 1, Thaxton, Bradley and Olsen [TBO], following Brillouin et al, in the 1984 foundational work for the modern Design Theory, The Mystery of Life's Origins [TMLO], exploit this information-entropy link, through the idea of moving from a random to a known microscopic configuration in the creation of the bio-functional polymers of life, and then -- again following Brillouin -- identify a quantitative information metric for the information of polymer molecules. For, in moving from a random to a functional molecule, we have in effect an objective, observable increment in information about the molecule. This leads to energy constraints, thence to a calculable concentration of such molecules in suggested, generously "plausible" primordial "soups." In effect, so unfavourable is the resulting thermodynamic balance, that the concentrations of the individual functional molecules in such a prebiotic soup are arguably so small as to be negligibly different from zero on a planet-wide scale. By many orders of magnitude, we don't get to even one molecule each of the required polymers per planet, much less bringing them together in the required proximity for them to work together as the molecular machinery of life. The linked chapter gives the details. More modern analyses [e.g. 
Trevors and Abel, here and here], however, tend to speak directly in terms of information and probabilities rather than the more arcane world of classical and statistical thermodynamics, so let us now return to that focus; in particular addressing information in its functional sense, as the third step in this preliminary analysis. As the third major step, we now turn to information technology, communication systems and computers, which provides a vital clarifying side-light from another view on how complex, specified information functions in information processing systems: [In the context of computers] information is data -- i.e. digital representations of raw events, facts, numbers and letters, values of variables, etc. -- that have been put together in ways suitable for storing in special data structures [strings of characters, lists, tables, "trees" etc], and for processing and output in ways that are useful [i.e. functional]. . . . Information is distinguished from [a] data: raw events, signals, states etc represented digitally, and [b] knowledge: information that has been so verified that we can reasonably be warranted, in believing it to be true. [GEM, UWI FD12A Sci Med and Tech in Society Tutorial Note 7a, Nov 2005.] That is, we have now made a step beyond mere capacity to carry or convey information, to the function fulfilled by meaningful -- intelligible, difference making -- strings of symbols. In effect, we here introduce into the concept, "information," the meaningfulness, functionality (and indeed, perhaps even purposefulness) of messages -- the fact that they make a difference to the operation and/or structure of systems using such messages, thus to outcomes; thence, to relative or absolute success or failure of information-using systems in given environments. And, such outcome-affecting functionality is of course the underlying reason/explanation for the use of information in systems. [Cf. 
the recent peer-reviewed, scientific discussions here, and here by Abel and Trevors, in the context of the molecular nanotechnology of life.] Let us note as well that since in general analogue signals can be digitised [i.e. by some form of analogue-to-digital conversion], the discussion thus far is quite general in force.

So, taking these three main points together, we can now see how information is conceptually and quantitatively defined, how it can be measured in bits, and how it is used in information processing systems; i.e., how it becomes functional. In short, we can now understand that: Functionally Specific, Complex Information [FSCI] is a characteristic of complicated messages that function in systems to help them practically solve problems faced by the systems in their environments. Also, in cases where we directly and independently know the source of such FSCI (and its accompanying functional organisation), it is, as a general rule, created by purposeful, organising intelligent agents. So, on empirical, observation-based induction, FSCI is a reliable sign of such design, e.g. the text of this web page, and billions of others all across the Internet. (Those who object to this therefore face the burden of showing empirically that such FSCI does in fact -- on observation -- arise from blind chance and/or mechanical necessity without intelligent direction, selection, intervention or purpose.)

Indeed, this FSCI perspective lies at the foundation of information theory: (i) recognising signals as intentionally constructed messages transmitted in the face of the possibility of noise; (ii) where also, intelligently constructed signals have characteristics of purposeful specificity, controlled complexity and system-relevant functionality based on meaningful rules that distinguish them from meaningless noise; (iii) further noticing that signals exist in functioning generation-transfer and/or storage-destination systems that (iv) embrace co-ordinated transmitters, channels, receivers, sources and sinks.

That this is broadly recognised as true can be seen from a surprising source, Dawkins, who is reported to have said in his The Blind Watchmaker (1987), p. 8:

Hitting upon the lucky number that opens the bank's safe [NB: cf. here the case in Brown's The Da Vinci Code] is the equivalent, in our analogy, of hurling scrap metal around at random and happening to assemble a Boeing 747. [NB: originally, this imagery is due to Sir Fred Hoyle, who used it to argue that life on earth bears characteristics that strongly suggest design. His suggestion: panspermia -- i.e. life drifted here, or else was planted here.] Of all the millions of unique and, with hindsight equally improbable, positions of the combination lock, only one opens the lock. Similarly, of all the millions of unique and, with hindsight equally improbable, arrangements of a heap of junk, only one (or very few) will fly. The uniqueness of the arrangement that flies, or that opens the safe, has nothing to do with hindsight. It is specified in advance. [Emphases and parenthetical note added, in tribute to the late Sir Fred Hoyle. (NB: This case also shows that we need not see boxes labelled "encoders/decoders" or "transmitters/receivers" and "channels" etc. for the model in Fig. 1 above to be applicable; i.e. the model is abstract rather than concrete: the critical issue is functional, complex information, not electronics.)]

Here, we see how the significance of FSCI naturally appears in the context of considering the physically and logically possible but vastly improbable creation of a jumbo jet by chance. Instantly, we see that mere random chance acting in a context of blind natural forces is a most unlikely explanation, even though the statistical behaviour of matter under random forces cannot strictly rule it out. But it is so plainly vastly improbable that, having seen the message -- a flyable jumbo jet -- we then make a fairly easy and highly confident inference to its most likely origin: i.e. it is an intelligently designed artifact.
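The "measured in bits" point above can be made concrete with Shannon's surprisal measure, I(x) = -log2 p(x): the rarer an outcome, the more bits it carries. The following is a minimal Python sketch (an editorial illustration, not part of the original comment; the lock size and alphabet are assumed for the example) applied to Dawkins's combination-lock analogy:

```python
import math

def surprisal_bits(p: float) -> float:
    """Shannon information (surprisal) of an event of probability p, in bits."""
    return -math.log2(p)

# A 6-digit combination lock has 10**6 equally likely settings; hitting the
# single setting that opens the safe by blind chance carries about 20 bits:
print(round(surprisal_bits(1 / 10**6), 2))  # 19.93

# Likewise, one specific 7-character string such as "STRINGS", drawn from an
# assumed 27-symbol alphabet (A-Z plus space), is one of 27**7 configurations:
print(round(surprisal_bits(1 / 27**7), 2))  # 33.28
```

Note that this only quantifies improbability under a uniform-chance model; the functional-specificity side of the argument is a separate, qualitative judgment about which configurations "open the lock."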
kairosfocus
March 8, 2017, 03:02 AM PDT
Eric Anderson: What is the information about? What language or symbolic system is it represented in? … It is the ultimate chasm that must be crossed from inanimate matter to living systems.
Although I agree that the chasm you point out is deep and wide, I would like to suggest that the 'ultimate chasm' has to do with functional coherence at the level of the organism as a whole. Even once the chasm of symbolism has been crossed, so that words, sentences and paragraphs can be formed, the question arises: what power makes it all into a coherent story? And beyond that: given that the story of an organism is ever-changing, incorporating a myriad of external and internal events, what power perpetuates that coherence, precisely for a lifetime? It is at the level of the organism as a whole that we see the true miracle of unity in life. Even 'representative information' falls short as an explanation.

Origenes
March 8, 2017, 02:56 AM PDT
Eric Anderson: "There is a world of difference between the fact that we can describe physical objects using information, and the fact that some physical objects actually contain information." Not some, but all physical objects contain what you describe as information.

timothya
March 8, 2017, 02:19 AM PDT
Armand Jacks: Thank you for your response. I'll pose a brief response and then we can continue tomorrow.

You seem to be conflating complexity with information. You then mention that because a clump of dirt is "a mix of many elements, it is far more information rich than any crystal." Where is that information? What is the information about? What language or symbolic system is it represented in?

Ideally I would prefer to do a back-and-forth on each nuance, but realistically we don't have that kind of time, so I will cut to the chase: Physical objects -- whether a clump of inanimate dirt or a single mineral -- do not contain information by their mere existence. Certainly not in the sense relevant to the present debate or the origin of biological systems.

Yes, we as intelligent beings can analyze physical objects using our intelligence and our tools of discovery. At that point we have produced information as a result of our intellectual effort. We can then describe our findings (the information we have produced) in some kind of symbolic language. This can then, as with all symbolically represented information, be transmitted and translated. This is how information works. It is how it always works.

There is a world of difference between the fact that we can describe physical objects using information, and the fact that some physical objects actually contain information. This is the issue that ultimately lies at the crux of origin-of-life studies. It is the fundamental issue that origin-of-life researchers are trying to grapple with. It is the ultimate chasm that must be crossed from inanimate matter to living systems. It is not the case that there is some gradient from a tiny bit of information in a mineral, to a bit more information in a clump of dirt, to a lot of information in DNA. Every molecule can be described using information. But only some molecules contain representative information. These are entirely different domains.

Eric Anderson
March 7, 2017, 11:10 PM PDT
That's fine, AJ. In your answer you mention "the spectroscopic information contained within crystals and within dirt", and say that one contains "far more" than the other. Not meaning to sound obtuse, but spectroscopic information would almost certainly be a representation created by a spectroscope, and not really contained in the material. Many people here would suggest that this type of information does indeed exist in a real physical sense, but that it is the product of a measurement taken from the material, not contained within it. Given your background, I will assume that you appreciate the distinction between the measurement and the soil, and so I would simply ask you whether the soil contains any of this kind of information. After all, this is the type of information contained within the cell, which is the actual topic of this conversation.

Upright BiPed
March 7, 2017, 10:46 PM PDT
UB:
It seems to me you were asking someone to agree that there is a difference between the information contained in a pile of inanimate dirt and the information contained in Chalconatronite. Now you seem to be saying that you don’t actually know if any information is there.
I apologize. I was just being a smart ass in my response to you. You did not deserve that. I could go into great detail about the spectroscopic information contained within crystals and within "dirt", but it wouldn't add much to the discussion. Suffice it to say, because your basic clump of dirt (soil, sediment) is a mix of many elements, it is far more information rich than any crystal, which is essentially a purified form of one or more elements in specific ratios. But it is late at night, I am old, and I have forgotten what the original argument was about. Catch you on the flip side. Good night.

Armand Jacks
March 7, 2017, 09:26 PM PDT
Eric, I am an analytical chemist by trade. More specifically, spectrometry. I can tell you with certainty that there is far more information stored in your basic lump of amorphous dirt than there is in any crystal. And the more pure the crystal, the less information. So, given your argument, how does the assertion that the genome has more information than a lump of dirt support it? Whatever that argument is. There are huge differences in levels of information within the natural world.

Armand Jacks
March 7, 2017, 09:15 PM PDT