Uncommon Descent Serving The Intelligent Design Community

RVB8 and the refusal to mark the difference between description and invention


. . . (of the concept, functionally specific, complex organisation and associated information, FSCO/I)


Sometimes, a longstanding objector here at UD — such as RVB8 — inadvertently reveals just how weak the objections to the design inference are, by persistently clinging to objections that were cogently answered long ago. This phenomenon of ideology triumphing over evident reality is worth highlighting in a headlined post, as an illustration of darwinist rhetorical stratagems and habits.

Here is RVB8 in a comment in the current Steve Fuller thread:

RVB8, 36: >> for ID or Creationism, I can get the information direct from the creators of the terminology. Dembski for Specified Complexity, Kairos for his invention of FSCO/I, and Behe for Irreducible Complexity.>>

As it seems necessary to set a pronunciation, the acronym FSCO/I shall henceforth be pronounced “fish-koi” (where, happily, koi are produced by artificial selection, a form of ID that is too often misused as a proxy for the alleged powers of culling by differential reproductive success in the wild).

For a long time, he and others of like ilk have tried to suggest that, as I have championed the acrostic summary FSCO/I, the concept I am pointing to is a dubious novelty that has not been tested through peer review or the like and so can be safely set aside. In fact, the term simply acknowledges that specified complexity is both organisational and informational, and that in many contexts it is specified in terms of the requisites of function through multiple coupled parts. Text, such as in this post, shows a simple form of such a structure: S-T-R-I-N-G-S.

Where of course, memorably, Crick classically pointed out to his son Michael on March 19, 1953 as follows, regarding DNA as text:

Crick’s letter

Subsequently, that code was elucidated (here in the mRNA, transcribed form):

The genetic code uses three-letter codons to specify the sequence of AAs in proteins, including start/stop signals, at six bits per AA

Likewise a process flow network is an expression of FSCO/I, e.g. an oil refinery:

Petroleum refinery block diagram illustrating FSCO/I in a process-flow system

This case is much simpler than the elucidated process-flow metabolic reaction network of the living cell's biochemistry.

I have also often illustrated FSCO/I in the form of functional organisation through a drawing of an ABU 6500 C3 reel (which I safely presume came about through using AutoCAD or the like):

All of this is of course very directly similar to something like protein synthesis [top left in the cell’s biochem outline], which involves both text strings and functionally specific highly complex organisation:

Protein Synthesis (HT: Wiki Media)

In short, FSCO/I is real, relevant and patently descriptive, both of the technological world and the biological world. It demands an adequate causal explanation, and the only serious explanation on the table that is empirically warranted is design.

As the text of this post illustrates, and as the text of objector comments to come will further, inadvertently, illustrate.

Now, I responded at no 37, as follows:

KF, 37: >>Unfortunately, your choice of speaking in terms of “invention” of FSCO/I speaks volumes on your now regrettably habitual refusal to acknowledge phenomena that are right in front of you. As in, a descriptive label acknowledges a phenomenon, it does not invent it.

Doubtless [and on long track record], you think that is a clever way to dismiss something you don’t wish to consider.

This pattern makes your rhetoric into a case in point of the sociological, ideological reaction to the design inference on tested sign. So, I now respond, by way of addressing a case of a problem of sustained unresponsiveness to evidence.

However, it only reveals that you are being selectively hyperskeptical and dismissive through the fallacy of the closed, ideologised, indoctrinated, hostile mind.

I suggest you need to think again.

As a start, look at your own comment, which is text. To wit, a s-t-r-i-n-g of 1943 ASCII characters, at 7 bits per character, indicating a config space of 2^(7 × 1943) = 2^13,601 possibilities. That is, a space with about 2.037*10^4094 cells.

The atomic and temporal resources of our whole observed cosmos, running at 1 search per each of 10^80 atoms, at 10^12 – 10^14 searches per second [a fast chemical reaction rate] for 10^17 s [the approximate time since the big bang], could not search more than about 10^111 cells, a negligibly small fraction. That is, the config space search challenge is real: there is not enough resource to blindly search more than a negligibly small fraction of the haystack. (And the notion sometimes put forward, of somehow having a golden search, runs into the fact that searches are subsets, so a search for a golden search comes from the power set of the direct config space, of order here 2^(10^4094). That is, it is exponentially harder.)
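For concreteness, here is a quick back-of-envelope check of those figures (a sketch in Python, using the post's own assumed numbers):

```python
# Rough check of the config-space figures above, on the post's own
# assumptions: 7-bit ASCII, 10^80 atoms, 10^14 searches/s, 10^17 s.
from math import log10

bits = 7 * 1943                   # bits in the 1943-character comment
space_exp = bits * log10(2)       # log10 of 2^13,601
print(f"config space ~ 10^{space_exp:.0f}")      # ~10^4094

searched_exp = 80 + 14 + 17       # 10^80 atoms x 10^14/s x 10^17 s
print(f"cells searched <= 10^{searched_exp}")    # 10^111

print(f"fraction searched ~ 10^{searched_exp - space_exp:.0f}")  # ~10^-3983
```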

How then did your text string come to be? By a much more powerful means: you as an intelligent and knowledgeable agent exerted intelligently directed configuration to compose a text in English.

That is why, routinely, when you see or I see text of significant size in English, we confidently and rightly infer to design.

As a simple extension, a 3-d object such as an Abu 6500 C3 fishing reel is describable in terms of bit strings in a description language, so functional organisation is reducible to an informational equivalent. Discussion in terms of strings is therefore without loss of generality (WLOG).
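To illustrate the reduction (a toy sketch; the minimal parts-and-connections description language and the part names are mine, purely for illustration, not a real reel specification):

```python
# Toy reduction of a functional 3-D organisation to a bit string:
# a parts-and-connections description, serialized and counted in bits.
import json

reel = {  # illustrative part names only
    "parts": ["frame", "spool", "gear", "pawl", "handle"],
    "mates": [["handle", "gear"], ["gear", "spool"], ["pawl", "gear"]],
}
description = json.dumps(reel, sort_keys=True)
print(len(description) * 8, "bits in this toy description string")
```

A real CAD file does the same job at far greater length, which is the point: the "wiring diagram" becomes a definite string in a definite space of possibilities.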

In terms of the living cell, we can simply point to the copious algorithmic TEXT in DNA, which directly fits with the textual search challenge issue. There is no empirically warranted blind chance and mechanical necessity mechanism that can plausibly account for it. We have every epistemic and inductive reasoning right to see that the FSCO/I in the cell is best explained as a result of design.

That twerdun, which comes before whodunit.

As for "oh, it's some readily scorned IDiot on a blog", I suggest you would do better to ponder this from Stephen Meyer:

The central argument of my book [= Signature in the Cell] is that intelligent design—the activity of a conscious and rational deliberative agent—best explains the origin of the information necessary to produce the first living cell. I argue this because of two things that we know from our uniform and repeated experience, which following Charles Darwin I take to be the basis of all scientific reasoning about the past. First, intelligent agents have demonstrated the capacity to produce large amounts of functionally specified information (especially in a digital form). Second, no undirected chemical process has demonstrated this power. Hence, intelligent design provides the best—most causally adequate—explanation for the origin of the information necessary to produce the first life from simpler non-living chemicals. In other words, intelligent design is the only explanation that cites a cause known to have the capacity to produce the key effect in question . . . . In order to [[scientifically refute this inductive conclusion] Falk would need to show that some undirected material cause has [[empirically] demonstrated the power to produce functional biological information apart from the guidance or activity of a designing mind. Neither Falk, nor anyone working in origin-of-life biology, has succeeded in doing this . . . .

The central problem facing origin-of-life researchers is neither the synthesis of pre-biotic building blocks (which Sutherland’s work addresses) nor even the synthesis of a self-replicating RNA molecule (the plausibility of which Joyce and Tracey’s work seeks to establish, albeit unsuccessfully . . . [[Meyer gives details in the linked page]). Instead, the fundamental problem is getting the chemical building blocks to arrange themselves into the large information-bearing molecules (whether DNA or RNA) . . . .

For nearly sixty years origin-of-life researchers have attempted to use pre-biotic simulation experiments to find a plausible pathway by which life might have arisen from simpler non-living chemicals, thereby providing support for chemical evolutionary theory. While these experiments have occasionally yielded interesting insights about the conditions under which certain reactions will or won’t produce the various small molecule constituents of larger bio-macromolecules, they have shed no light on how the information in these larger macromolecules (particularly in DNA and RNA) could have arisen. Nor should this be surprising in light of what we have long known about the chemical structure of DNA and RNA. As I show in Signature in the Cell, the chemical structures of DNA and RNA allow them to store information precisely because chemical affinities between their smaller molecular subunits do not determine the specific arrangements of the bases in the DNA and RNA molecules. Instead, the same type of chemical bond (an N-glycosidic bond) forms between the backbone and each one of the four bases, allowing any one of the bases to attach at any site along the backbone, in turn allowing an innumerable variety of different sequences. This chemical indeterminacy is precisely what permits DNA and RNA to function as information carriers. It also dooms attempts to account for the origin of the information—the precise sequencing of the bases—in these molecules as the result of deterministic chemical interactions . . . .

[[W]e now have a wealth of experience showing that what I call specified or functional information (especially if encoded in digital form) does not arise from purely physical or chemical antecedents [[–> i.e. by blind, undirected forces of chance and necessity]. Indeed, the ribozyme engineering and pre-biotic simulation experiments that Professor Falk commends to my attention actually lend additional inductive support to this generalization. On the other hand, we do know of a cause—a type of cause—that has demonstrated the power to produce functionally-specified information. That cause is intelligence or conscious rational deliberation. As the pioneering information theorist Henry Quastler once observed, “the creation of information is habitually associated with conscious activity.” And, of course, he was right. Whenever we find information—whether embedded in a radio signal, carved in a stone monument, written in a book or etched on a magnetic disc—and we trace it back to its source, invariably we come to mind, not merely a material process. Thus, the discovery of functionally specified, digitally encoded information along the spine of DNA, provides compelling positive evidence of the activity of a prior designing intelligence. This conclusion is not based upon what we don’t know. It is based upon what we do know from our uniform experience about the cause and effect structure of the world—specifically, what we know about what does, and does not, have the power to produce large amounts of specified information . . . .

[[In conclusion,] it needs to be noted that the [[now commonly asserted and imposed limiting rule on scientific knowledge, the] principle of methodological naturalism [[ that scientific explanations may only infer to “natural[[istic] causes”] is an arbitrary philosophical assumption, not a principle that can be established or justified by scientific observation itself. Others of us, having long ago seen the pattern in pre-biotic simulation experiments, to say nothing of the clear testimony of thousands of years of human experience, have decided to move on. We see in the information-rich structure of life a clear indicator of intelligent activity and have begun to investigate living systems accordingly. If, by Professor Falk’s definition, that makes us philosophers rather than scientists, then so be it. But I suspect that the shoe is now, instead, firmly on the other foot. [[Meyer, Stephen C: Response to Darrel Falk’s Review of Signature in the Cell, SITC web site, 2009. (Emphases and parentheses added.)]

Let me focus attention on the highlighted:

First, intelligent agents have demonstrated the capacity to produce large amounts of functionally specified information (especially in a digital form). Second, no undirected chemical process has demonstrated this power. Hence, intelligent design provides the best—most causally adequate—explanation for the origin of the information necessary to produce the first life from simpler non-living chemicals.

The only difference between this and what I have highlighted through the acronym FSCO/I is that functionally specific organisation is similarly reducible to an informational string, and is in this sense equivalent to it. That is hardly news: AutoCAD has reigned supreme as an engineer's design tool for decades now. Going back to 1973, Orgel, in his early work on specified complexity, wrote:

. . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . .

[HT, Mung, fr. p. 190 & 196:] These vague ideas can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. [–> this is of course equivalent to the string of yes/no questions required to specify the relevant “wiring diagram” for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002, also cf here, here and here (with here on self-moved agents as designing causes).] One can see intuitively that many instructions are needed to specify a complex structure. [–> so if the q’s to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions. [–> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes. [The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196.]

So, the concept of reducing functional organisation to a description on a string of y/n structured questions — a bit string in some description language — is hardly news, nor is it something I came up with. And Orgel is obviously speaking to FUNCTIONAL specificity, so that is not new either.
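As a simple worked illustration of Orgel's measure (a sketch; the bit counts follow directly from the alphabet sizes):

```python
# Information as a chain of yes/no questions: bits needed to single out
# one particular string among all strings of the same length.
from math import ceil, log2

def yn_questions(alphabet_size: int, length: int) -> int:
    """Yes/no questions (bits) to pin down one specific string."""
    return ceil(length * log2(alphabet_size))

print(yn_questions(4, 300))     # a 300-base DNA string: 600 bits
print(yn_questions(128, 1943))  # the 1943-char ASCII comment: 13,601 bits
# A simple repeating structure needs few instructions by contrast:
# "repeat 'AB' 500 times" is itself short, whatever the output length.
```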

Likewise, the notion of search spaces or config spaces is a simple reflection of the phase space concept of statistical thermodynamics.

Dembski’s remarks are also significant, here from NFL:

p. 148:“The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology.

I submit that what they have in mind is specified complexity, or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . .

Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. . . . In virtue of their function [[a living organism’s subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole. Dembski cites:

Wouters, p. 148: “globally in terms of the viability of whole organisms,”

Behe, p. 148: “minimal function of biochemical systems,”

Dawkins, pp. 148 – 9: “Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is . . . the ability to propagate genes in reproduction.”

On p. 149, he roughly cites Orgel’s famous remark from 1973, which exactly cited reads:

In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . .

And, p. 149, he highlights Paul Davies in The Fifth Miracle: “Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity.”] . . .”

p. 144: [[Specified complexity can be more formally defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”
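The correspondence between the probability bound and the bit bound is simple arithmetic (a quick check):

```python
# 1 chance in 10^150, expressed in bits:
from math import log2
print(150 * log2(10))   # ~498.3 bits, i.e. the ~500-bit complexity bound
```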

So, the problem of refusal to attend to readily available evidence, or even evidence put directly in front of objectors to design theory, is significant and clear.

What this clinging to fallacies and myths in the teeth of correction, for years on end, ultimately reflects is the weakness of the case being made against design by its persistent objectors.

Which is itself highly significant.>>

Now, let us discuss, duly noting the highlighted and emphasised. END

Comments
critical rationalist,
Usually when you criticize a theory, you start with the actual theory, rather than a straw man. But if you “don’t get” that, well, I’m not sure how I can help.
Write it up, man. Please. Again, I'll give you a head post and you can tell us about this remarkable theory that resolves the information problem in biology that origin of life researchers have been grappling with for decades. I'm sure once you explain it to us in some clear detail we'll be able to understand.
Eric Anderson
March 12, 2017 at 05:08 PM PDT
@Origenes Usually when you criticize a theory, you start with the actual theory, rather than a straw man. But if you "don't get" that, well, I'm not sure how I can help.
critical rationalist
March 12, 2017 at 03:34 PM PDT
CR: Because they are not merely, “separated by time”.
A series of random mutations is not random because they are not merely separated by time? I don't get it.
Origenes
March 12, 2017 at 10:10 AM PDT
@Origenes
Why not? The net variation is the result of a series of random mutations. If one base mutation is random, why is a collection of, say, 100 random base mutations, separated by time, not random?
Because they are not merely, "separated by time".
critical rationalist
March 12, 2017 at 09:27 AM PDT
@Phinehas I wrote:
Software developers are not abstract designers. They are concrete with defined limitations, such as what they know and when they know it, etc. Human knowledge genuinely grows, where it did not exist before, via conjecture and criticism. Are you willing to impose such limitations and conditions on ID’s designer?
You wrote:
Is there any reason I should assume necessarily that “ID’s designer” doesn’t have such limitations and conditions?
As a software developer, you know we currently cannot simply rewrite an entire application overnight to migrate from, say, Win32 to C#. It's simply not practical due to our limitations. However, ID's designer is abstract and has no defined limitations. As such, it's not limited by what it knows, when it knew it, what resources or time it has at its disposal, etc. So, it's not limited from rewriting an application, in its entirety, for every single customer, to meet their specific needs that day. And the same could be said about designing computers. Entire one-off operating systems could be written for each one-off computer built for each customer, along with one-off versions of each application to run on them. Nor is ID's designer limited from creating one-off programming languages for each customer's application. To use another example, we currently do not design entirely new automobiles every year because doing so is simply too resource intensive, expensive, etc. It's simply not practical. Even then, new models often reuse existing parts and even the same power train, because next-gen engines need to be long-term tested on the track, etc. Manufacturers must price their cars so customers will buy them, so they can make a profit. They must report to their shareholders and request R&D budgets. However, ID's designer would not be limited from designing an entirely new model, from the ground up, for every single vehicle. This is because it has no limitations on what it knows, such as whether a design is crashworthy, or whether it has long-term engineering issues, etc. It has no customers, competitors, shareholders, R&D budgets, etc. Nor is it limited from designing automobiles in the order of most complex to least, or even all at once. IOW, what you're appealing to are today's human designers, and human beings could not have designed themselves. Even then, that appeal won't hold. At some point in the future, assuming we create the necessary knowledge in time to prevent ourselves from going extinct, we'll use exponentially more powerful computers than we have now to create one-off systems and products for each customer, in conjunction with vastly more capable manufacturing systems which make 3D printing look like child's play. The need for reuse will simply be virtually nonexistent. Heck, customers will do it in their own homes and garages. So will their *children*. IOW, you greatly underestimate the role that knowledge, or the lack thereof, plays in design. Human beings are good explanations for human-designed things, precisely because of our current limitations.
Why shouldn’t I just follow the evidence where it leads? I’m willing to do that. Are you?
You seem to be confused about the role that evidence plays. Theories are tested by observations, not derived from them. So, you cannot "follow" evidence in the sense you're referring to. From the article "What Did Karl Popper Really Say about Evolution":
What Popper calls the historical sciences do not make predictions about long past unique events (postdictions), which obviously would not be testable. (Several recent authors—including Stephen Jay Gould in Discover, July 1982—make this mistake.) These sciences make hypotheses involving past events which must predict (that is, have logical consequences) for the present state of the system in question. Here the testing procedure takes for granted the general laws and theories and is testing the specific conditions (or initial conditions, as Popper usually calls them) that held for the system. A scientist, on the basis of much comparative anatomy and physiology, might hypothesize that, in the distant past, mammals evolved from reptiles. This would have testable consequences for the present state of the system (earth's surface with the geological strata in it and the animal and plant species living on it) in the form of reptile-mammal transition fossils that should exist, in addition to other necessary features of the DNA, developmental systems, and so forth, of the present-day reptiles and mammals.
critical rationalist
March 12, 2017 at 08:55 AM PDT
CR: Again, you are referring to variation in a single iteration of the loop. I’m referring to the net variation that occurs across multiple loops. If all you had was variation, the net variation would be random. But that’s not the case.
Why not? The net variation is the result of a series of random mutations. If one base mutation is random, why is a collection of, say, 100 random base mutations, separated by time, not random?
Origenes
March 12, 2017 at 03:34 AM PDT
Again, you are referring to variation in a single iteration of the loop. I'm referring to the net variation that occurs across multiple loops. If all you had was variation, the net variation would be random. But that's not the case.
I see how one might think your second sentence could support the notion that evolution is not random, but I don’t see how it says anything at all about variation not being random.
There are constraints regarding what kinds of variations can occur in DNA. In addition, it is thought that mutations are not distributed equally and that some repair mechanisms are more effective in some areas than others, which can skew the results of mutations and even cause mutations themselves in the process. But they are random with respect to any problem to be solved. The key thing is that, in both the case of people and of evolution, variations and conjectures are not guaranteed to be correct. We start out knowing they contain errors. In the case of people, the contents of our theories are not derived from observations. And in the case of evolution, the content is not mechanically transcribed or derived from some preexisting source.
critical rationalist
March 10, 2017 at 05:26 PM PDT
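For concreteness, here is a minimal sketch of the variation-plus-selection loop being debated in this exchange. It is a toy with an explicit, fixed fitness function (itself the very assumption contested elsewhere in this thread): each mutation is random, yet the accumulated result is not.

```python
# Toy variation-and-selection loop: per-iteration mutation is random,
# but selection makes the net change across iterations non-random.
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"   # illustrative fitness peak

def fitness(s: str) -> int:
    return sum(a == b for a, b in zip(s, TARGET))

current = "".join(random.choice(ALPHABET) for _ in TARGET)
while fitness(current) < len(TARGET):
    i = random.randrange(len(current))                  # random site
    mutant = current[:i] + random.choice(ALPHABET) + current[i + 1:]
    if fitness(mutant) >= fitness(current):             # selection
        current = mutant

print(current)  # converges on the target despite random variation
```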
CR:
Are you willing to impose such limitations and conditions on ID’s designer?
Is there any reason I should assume necessarily that "ID's designer" doesn't have such limitations and conditions? Why shouldn't I just follow the evidence where it leads? I'm willing to do that. Are you? In short, it appears we both agree that designers as well as evolution share the characteristic of creating in the order described by the evidence. Progress!
Phinehas
March 10, 2017 at 03:00 PM PDT
@Phinehas I wrote:
On the other hand, there is no necessary order for an abstract designer because it has no limitation on what knowledge it possessed or when it possessed it. Any order would be compatible with such a designer, including the most complex to least complex, or even all at once.
You wrote:
You’ve never actually worked on a large software project, have you?
Yes, I have. I'm working on one right now. This is what I mean by assuming we know nothing about how human designers design things. Software developers are not abstract designers. They are concrete, with defined limitations, such as what they know and when they know it, etc. Human knowledge genuinely grows, where it did not exist before, via conjecture and criticism. Are you willing to impose such limitations and conditions on ID's designer? An organism could not be "built" until the knowledge of what transformations are necessary to construct it from raw materials was created. That is a necessary consequence of variation and selection. However, ID's designer has no such limitations. It merely has the property of "design", which is like saying fire has the property of dryness. As such, there is no limitation on what it knew and when it knew it. So, it isn't limited from having the knowledge of how to build any organism that has, does or could exist. That means it could have created them in the order of most complex to least, or all at once. At best, one could say "that's just what the abstract designer must have wanted". Also, software engineers are well adapted to the process of designing software. As such, they exhibit the appearance of design and would need to be explained, etc. I don't see how adding a designer to the mix in regard to biological complexity improves the problem, because it relies on the pre-existence of, well, a designer, which would be well adapted to the task of designing organisms. Or are you saying there can be a designer that isn't well adapted to designing things? How would that work, exactly? Can just anything design something? Again, that would be like saying fire has the property of dryness. Do you have evidence of designers that are not themselves complex and well adapted for the purpose of designing things? If you're going to limit theories to what we have observations of (which is bad philosophy, by the way), every designer we've observed has had a complex, material brain. So a designer cannot be the solution to the problem. To summarize, some designer that "just was", complete with the knowledge that just the right genes would result in just the right proteins that would result in just the right features, already present, doesn't serve an explanatory purpose. That's because one can more efficiently state that organisms "just appeared", complete with the knowledge that just the right genes would result in just the right proteins that would result in just the right features, already present. Neither case accounts for the origin of that knowledge. And, no, the latter is not evolutionary theory, BTW.
critical rationalist
March 10, 2017 at 02:21 PM PDT
CR: I interpreted "it" the way I did because it made even less sense for "it" to refer to evolution. When "it" refers to evolution, your statement just becomes a non sequitur. Variation in the process of evolution is not completely random. This is because evolution is a repeating process of variation and selection, not just variation on its own. Variation is a component of evolution, not the other way around. How does what evolution is explain what variation is? Or how it is not random? I see how one might think your second sentence could support the notion that evolution is not random, but I don't see how it says anything at all about variation not being random.
Phinehas
March 10, 2017 at 02:13 PM PDT
@Phinehas You seem to have misinterpreted what I meant by "it" in that sentence. "It" refers to the process of evolution, not variation. The role that variation ultimately plays across multiple loops is not random; it is only random with respect to any problem to be solved. It does not need to start over anew with each loop but builds on other solutions. So, while the specific variations in a single iteration of the loop are random, the resulting variations that accumulate are not. Think of human knowledge, which uses a vast number of auxiliary theories that were themselves the result of conjecture and criticism, etc. The key point being that variations are not guaranteed to solve a problem. When people conjecture theories, they are in the context of a problem. But they are not derived from anything, such as experience. They are guesses. There is no source that we can turn to as a last resort that will not lead us into error.
critical rationalist
March 10, 2017 at 01:46 PM PDT
CR:
On the other hand, there is no necessary order for an abstract designer because it has no limitation on what knowledge it possessed or when it possessed it. Any order would be compatible with such a designer, including the most complex to least complex, or even all at once.
You've never actually worked on a large software project, have you?
Furthermore, this designer apparently intentionally and unnecessarily decided to use an order that would be only necessary for evolutionary theory. Didn’t this designer realize how this would look? Was the designer surprised that evolutionary theory would be proposed based on that order?
Yeah, software designers apparently intentionally and unnecessarily decide to use this same order over and over again. Evidently, they don't realize it would be only necessary for evolutionary theory and other purposeless processes that have no end goal in mind. I imagine they would be surprised that evolutionary theory would be proposed as the origin of their software based on the described order.
Phinehas
March 10, 2017 at 11:39 AM PDT
CR:
Variation[1] in the process of evolution is not completely random. This is because it’s a repeating process of variation[2] and selection, not just variation[2] on its own.
You've included 'variation[2]' in your description of what 'variation[1]' is. I hope you can see how confusing this could be. Obviously, [1] cannot be synonymous with [2].* I've been talking about [2], which is clearly random, is it not? I'm not sure what you mean by [1]. Can you elucidate?

*In case this isn't obvious enough...
variation = variation + selection
variation = (variation + selection) + selection
variation = ((variation + selection) + selection) + selection
...
Phinehas
March 10, 2017 at 11:15 AM PDT
If knowledge grows via variation and selection, then what empirical observations can we use to test that theory? One example is the order in which organisms appear. Evolution could not result in organisms appearing in order from most complex to least complex. Nor could they all appear at once. The order in which organisms appear is a necessary consequence of the theory that complexity grows via variation and selection. On the other hand, there is no necessary order for an abstract designer, because it has no limitation on what knowledge it possessed or when it possessed it. Any order would be compatible with such a designer, including most complex to least complex, or even all at once. There are significantly fewer necessary consequences of such an abstract designer for which we can make empirical tests. Furthermore, this designer apparently intentionally and unnecessarily decided to use an order that would be only necessary for evolutionary theory. Didn't this designer realize how this would look? Was the designer surprised that evolutionary theory would be proposed based on that order? IOW, evolutionary theory (complexity grows via variation and selection) explains that order, while ID does not. That order must be "just what the designer wanted", which is a bad explanation.
critical rationalist
March 10, 2017 at 09:52 AM PDT
@KF
CR, did you notice how error correction of coded information occurs?
Can you be more specific? As mentioned in the paper:
The information in the recipe is an abstract constructor that I shall call knowledge (without a knowing subject [26]). Knowledge has an exact characterization in constructor theory: it is information that can act as a constructor and cause itself to remain instantiated in physical substrates. Crucially, error-correcting the replication is necessary. Hence the subunits pi must assume values in a discrete (digital) information variable: one whose attributes are separated by non-allowed attributes. For, if all values in a continuum were allowed, error-correction would be logically impossible.
So, this is one simpler form of error correction. Analog values are not copied exactly, which allows errors to build up.
As for discussion on books and manuals, intelligent, volitional action is even more explicitly present. So, kindly explain to us how you plan to demonstrate FSCO/I arising from lucky noise starting at arbitrary configurations filtered for function with elimination of non-functional forms
Yes, KF, books written by people contain explanatory knowledge, which only people can create. I've already addressed the difference in the kinds of knowledge. Knowledge in the genome is not explanatory in nature. It is non-explanatory, in that it represents useful rules of thumb that have a very limited reach. On the other hand, the knowledge in books, which were created by people, has a significantly greater reach. For example, take the laryngeal nerve in the neck of a giraffe. As its neck became longer, the knowledge in a giraffe's genome did not contain an explanatory theory of routing that could reroute the nerve so it didn't go all the way down the neck, around the heart, and then back up again. Its reach is significantly limited. One exception to this, which UB might be alluding to, is that DNA does have significant reach, in that it can be used to encode the transformations of matter necessary to convert raw materials into all organisms in the biosphere. However, this represents a leap to universality, not explanatory knowledge. Before the first universal number system was created, people developed systems that were not universal. In fact, some systems could have been universal if not for additional rules that were added to prevent it. And the same can be said of the universality of computation, universal letter systems, etc. They all evolved from much simpler systems and made a disproportionate leap to universality when a single addition was made, which was often unintentional and not planned. They are examples of emergent properties of matter.
Explain how one gets TO islands of function that way on config spaces of scale 10^150 – 10^301 or worse, then let us know how we get to D/RNA and metabolising cells from a Darwin’s pond or the like, with empirical observational warrant.
I've already given the explanation, KF. All knowledge grows via variation and criticism. It's a universal theory of the growth of knowledge in brains, books and even the genomes of organisms. Evolutionary theory doesn't suggest any specific features in biology were intentional targets to hit. Yet the numbers you quote assume they were. Some other solution could have occurred instead. And they were initially formed from simpler solutions, etc. Again, evolution isn't completely random; it's random with respect to any specific problem to solve. Complexity grows via variation and selection in a piecemeal fashion.
critical rationalist
March 10, 2017 at 09:33 AM PDT
CR, did you notice how error correction of coded information occurs? As in, via built-in organised redundancy and algorithms designed to detect same, that have to run on appropriate hardware set up to detect and correct? (Try a simple triple-repetition code with bitwise majority voting, or the equivalent; see the sketch after this comment.) The phenomenon you are appealing to is based on codes, thus language, protocols (rules that manage contingency!) and integrated, organised, information-rich systems to carry out algorithmic functions. Algorithms are yet another level. In short, your whole discussion is riddled with the issue of functionally specific, coherently organised, complex information. From just the text you wrote we readily see the source of such FSCO/I: intelligently directed configuration, i.e. design. As for discussion on books and manuals, intelligent, volitional action is even more explicitly present. So, kindly explain to us how you plan to demonstrate FSCO/I arising from lucky noise starting at arbitrary configurations filtered for function with elimination of non-functional forms. Explain how one gets TO islands of function that way on config spaces of scale 10^150 - 10^301 or worse, then let us know how we get to D/RNA and metabolising cells from a Darwin's pond or the like, with empirical observational warrant. Absent this, we have a perfect right to conclude that you are doing little more than putting up ideological posturing in the teeth of a trillion-member observation base on the source of FSCO/I, backed up by needle-in-haystack search challenge analysis. KF
kairosfocus
March 10, 2017 at 05:42 AM PDT
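To make the triple-repetition voting code mentioned above concrete (a minimal sketch; the three-copies-per-bit encoding and the single-flip noise model are standard textbook assumptions):

```python
# Triple-repetition code: send each bit three times; the receiver takes
# a majority vote, correcting any single flipped bit per triple.
def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(coded):
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

message = [1, 0, 1, 1]
sent = encode(message)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
sent[4] ^= 1                    # channel noise flips one bit
assert decode(sent) == message  # the vote corrects the error
```

Note that the redundancy and the voting rule are themselves organised, functional structure: the code corrects errors only because encoder and decoder share a protocol.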
@Origenes You're attempting to conflate random changes in problem spaces with error correction being random with respect to some problem space. The latter is not random. You're presenting a false dichotomy: that unless every aspect of a process is non-random, the output must be completely random. How does being a "free, rational person" enable us to always start out with the right conjectured solutions to problems in the first place, preventing the process from being completely random? So, what's the difference? IOW, it seems that you've arbitrarily decided what's random and what's not. Again, I'm making a distinction between non-explanatory knowledge and explanatory knowledge. Only people can create explanatory knowledge. So, I'm not saying the creation of knowledge, or even the kind of knowledge evolution creates, is exactly the same as that of a rational person. I'm saying they can be explained by the same universal theory of how knowledge grows. In fact, I'm saying there is a common explanation where you apparently think none can exist, since they supposedly cannot be compared at all. In the case of non-explanatory knowledge, people accidentally discover solutions to problems they never intended to solve by accidentally testing solutions they didn't conceive of in the first place via unintended circumstances, the "dumb luck" of being at the right place at the right time, etc. Does that make the outcome completely random? No, it does not. Some research data is randomly destroyed via random accidents. Researchers stop their work for random reasons, such as losing their job, contracting an illness, randomly running into a future spouse and deciding to change jobs to accommodate them, etc. Does the loss of research in those specific instances somehow make all research completely random? No, it does not. The knowledge in books plays a causal role in its being retained when embedded in a storage medium. Books get reprinted. They are put on a shelf to be referenced at a later date, rather than recycled. A repair manual for a car is reprinted because it contains the knowledge of how to repair cars that are still on the road and that people want to fix. Should that no longer be the case, it will stop playing a causal role. The same can be said for knowledge in brains and even the genome.
critical rationalist
March 10, 2017 at 03:39 AM PDT
PS: Cicero, c. 50 BC:
Is it possible for any man to behold these things, and yet imagine that certain solid and individual bodies move by their natural force and gravitation, and that a world so beautifully adorned was made by their fortuitous concourse? He who believes this may as well believe that if a great quantity of the one-and-twenty letters, composed either of gold or any other matter, were thrown upon the ground, they would fall into such order as legibly to form the Annals of Ennius. I doubt whether fortune could make a single verse of them. How, therefore, can these people assert that the world was made by the fortuitous concourse of atoms, which have no color, no quality—which the Greeks call [poiotes], no sense? [Cicero, THE NATURE OF THE GODS BK II Ch XXXVII, C1 BC, as trans Yonge (Harper & Bros., 1877), pp. 289 - 90.]
--> Yes, the concept and inferred significance of FSCO/I on observing coded text etc. is THAT old, at least.
kairosfocus
March 9, 2017 at 10:14 PM PDT
More on info basics from my always linked note:
A] The Core Question: Information, Messages and Intelligence

Since the end of the 1930's, five key trends have emerged, converged and become critical in the worlds of science and technology:
1] Information Technology and computers, starting with the Atanasoff-Berry Computer [ABC], and other pioneering computers in the early 1940's;
2] Communication technology and its underpinnings in information theory, starting with Shannon's breakthrough analysis in 1948;
3] The partial elucidation of the DNA code as the information basis of life at molecular level, since the 1950s, as, say Thaxton reports by citing Sir Francis Crick's March 19, 1953 remarks to his son: "Now we believe that the DNA is a code. That is, the order of bases (the letters) makes one gene different from another gene (just as one page of print is different from another)";
4] The "triumph" of the Miller-Urey spark-in-gas experiment, also in 1953, which produced several amino acids, the basic building blocks of proteins; but, also, we have seen a persistent failure thereafter to credibly and robustly account for the origin of life through so-called chemical evolution across subsequent decades; and,
5] The discovery of the intricate fine-tuning of the parameters in the observed cosmos for life as we know and experience it -- strange as it may seem: again, starting in 1953.
The common issue in all of these lies in the implications of the concepts of communication and information -- i.e. the substance that is communicated . . . .

[As a model framework shows] information-bearing messages flow from a source to a sink, by being: (1) encoded, (2) transmitted through a channel as a signal, (3) received, and (4) decoded. At each corresponding stage: source/sink encoding/decoding, transmitting/receiving, there is in effect a mutually agreed standard, a so-called protocol. [For instance, HTTP -- hypertext transfer protocol -- is a major protocol for the Internet. This is why many web page addresses begin: "http://www . . ."] However, as the diagram [--> UD does not readily permit diags in comments, generally] hints at, at each stage noise affects the process, so that under certain conditions, detecting and distinguishing the signal from the noise becomes a challenge. Indeed, since noise is due to a random fluctuating value of various physical quantities [due in turn to the random behaviour of particles at molecular levels], the detection of a message and accepting it as a legitimate message rather than noise that got lucky, is a question of inference to design.

In short, inescapably, the design inference issue is foundational to communication science and information theory. Let us note, too, that similar empirically testable inferences to intelligent agency are a commonplace in forensic science, archaeology, pharmacology and a great many fields of pure and applied science. Climatology is an interesting case: the debate over anthropogenic climate change is about unintended consequences of the actions of intelligent agents. Thus, Dembski's definition of design theory as a scientific project through pointed question and answer is apt:
intelligent design begins with a seemingly innocuous question: Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause? . . . Proponents of intelligent design, known as design theorists, purport to study such signs formally, rigorously, and scientifically. Intelligent design may therefore be defined as the science that studies signs of intelligence. [BTW, it is sad but necessary to highlight what should be obvious: namely, that it is only common academic courtesy (cf. here, here, here, here, here and here!) to use the historically justified definition of a discipline that is generally accepted by its principal proponents.]
So, having now highlighted what is at stake, we next clarify two key underlying questions. Namely, what is "information"? Then, why is it seen as a characteristic sign of intelligence at work?

First, let us identify what intelligence is. This is fairly easy: for, we are familiar with it from the characteristic behaviour exhibited by certain known intelligent agents -- ourselves. Specifically, as we know from experience and reflection, such agents take actions and devise and implement strategies that creatively address and solve problems they encounter; a functional pattern that does not depend at all on the identity of the particular agents. In short, intelligence is as intelligence does. So, if we see evident active, intentional, creative, innovative and adaptive [as opposed to merely fixed instinctual] problem-solving behaviour similar to that of known intelligent agents, we are justified in attaching the label: intelligence. [Note how this definition by functional description is not artificially confined to HUMAN intelligent agents: it would apply to computers, robots, the alleged alien residents of Area 51, Vulcans, Klingons or Kzinti, or demons or gods, or God.]

But also, in so solving their problems, intelligent agents may leave behind empirically evident signs of their activity; and -- as say archaeologists and detectives know -- functionally specific, complex information [FSCI] [--> the reference to organisation extends this] that would otherwise be utterly improbable, is one of these signs.

Such preliminary points should also immediately lay to rest the assertion in some quarters that inference to design is somehow necessarily "unscientific" -- as, such is said to always and inevitably be about improperly injecting "the supernatural" into scientific discourse. (We hardly need to detain ourselves here with the associated claim that intelligence is a "natural" phenomenon, one that spontaneously emerges from the biophysical world; for that is plainly one of the issues to be settled by investigation and analysis in light of empirical data, conceptual issues and comparative difficulties, not dismissed by making question-begging evolutionary materialist assertions. Cf App 6 below. [Also, HT StephenB, a longstanding commenter at the Uncommon Descent [UD] blog, for deeply underscoring the significance of the natural/supernatural issue and for providing incisive comments, which have materially helped shape the below.])

Now, Dembski's definition just above draws on the common-sense point that: [a] we may quite properly make a significantly different contrast from "natural vs. supernatural": i.e. "natural" vs. "artificial." [Where "natural" = "spontaneous" and/or "tracing to chance + necessity as the decisive causal factors" -- what we may term material causes; and, "artificial" = "intelligent."] He and other major design thinkers therefore propose that: [b] we may then set out to identify key empirical/ scientific factors (= "signs of intelligence") to reliably mark the distinction.

One of these, is that when we see regularities of nature, we are seeing low contingency, reliably observable, spontaneous patterns and therefore scientifically explain such by law-like mechanical necessity: e.g. an unsupported heavy object, reliably, falls by "force of gravity." But, where we see instead high contingency -- e.g., which side of a die will be uppermost when it falls -- this is chance ["accident"] or intent ["design"].
Then, if we further notice that the observed highly contingent pattern is otherwise very highly improbable [i.e. "complex"] and is independently functionally specified, it is most credible that it is so by design, not accident. (Think of a tray of several hundreds of dice, all with "six" uppermost: what is its best explanation -- mechanical necessity, chance, or intent? [Cf further details below.])

Consequently, we can easily see that [c] the attempt to infer or assert that intelligent design thought invariably constitutes "a 'smuggling-in' of 'the supernatural' " (as opposed to explanation by reference to the "artificial" or "intelligent") as the contrast to "natural," is a gross error; one that not only begs the question but also misunderstands, neglects or ignores (or even sometimes, sadly, calculatedly distorts) the explicit definition of ID and its methods of investigation as has been repeatedly published and patiently explained by its leading proponents. (Cf. here for a detailed case study on how just this -- too often, sadly, less than innocent -- mischaracterisation of Design Theory is used by secularist advocates such as the ACLU.)

Further, given the significance of what routinely happens when we see an apparent message, we know or should know that [d] we routinely and confidently infer from signs of intelligence to the existence and action of intelligence. On this, we should therefore again observe that Sir Francis Crick noted to his son, Michael, in 1953, in the already quoted letter: "Now we believe that the DNA is a code. That is, the order of bases (the letters) makes one gene different from another gene (just as one page of print is different from another)." For, complex, functional messages, per reliable observation, credibly trace to intelligent senders. This holds, even where in certain particular cases one may then wish to raise the subsequent question: what is the identity (or even, nature) of the particular intelligence inferred to be the author of certain specific messages?

In turn, this may lead to broader, philosophical -- that is, worldview level -- questions. Observe carefully, though: [e] such questions go beyond the "belt" of science theories, proper, into the worldview-tinged issues that -- as Imre Lakatos reminded us -- are embedded in the inner core of scientific research programmes, and are in the main addressed through philosophical rather than specifically scientific methods. [It helps to remember that for a long time, what we call "science" today was termed "natural philosophy."] Also, I think it is wiser to acknowledge that we have no satisfactory explanation of a matter, rather than insist that one will only surrender one's position (which has manifestly failed after reasonable trials) if a "better" one emerges -- all the while judging "better" by selectively hyperskeptical criteria.

In short, those who would make such a rhetorical dismissal, would do well to ponder anew the cite at the head of this web page. For, the key insight of Cicero [C1 BC!] is that, in particular, a sense-making (thus, functional), sufficiently complex string of digital characters is a signature of a true message produced by an intelligent actor, not a likely product of a random process. He then [logically speaking] goes on to ask concerning the evident FSCI in nature, and challenges those who would explain it by reference to chance collocations of atoms.
That is a good challenge, and it is one that should not be ducked by worldview-level begging of serious definitional questions or -- worse -- shabby rhetorical misrepresentations and manipulations.

Therefore, let us now consider in a little more detail a situation where an apparent message is received. What does that mean? What does it imply about the origin of the message . . . or, is it just noise that "got lucky"?

If an apparent message is received, it means that something is working as an intelligible -- i.e. functional -- signal for the receiver. In effect, there is a standard way to make and send and recognise and use messages in some observable entity [e.g. a radio, a computer network, etc.], and there is now also some observed event, some variation in a physical parameter, that corresponds to it. [For instance, on this web page as displayed on your monitor, we have a pattern of dots of light and dark and colours on a computer screen, which correspond, more or less, to those of text in English.]

Information theory, as Fig A.1 illustrates, then observes that if we have a receiver, we credibly have first had a transmitter, and a channel through which the apparent message has come; a meaningful message that corresponds to certain codes or standard patterns of communication and/or intelligent action. [Here, for instance, through HTTP and TCP/IP, the original text for this web page has been passed from the server on which it is stored, across the Internet, to your machine, as a pattern of binary digits in packets. Your computer then received the bits through its modem, decoded the digits, and proceeded to display the resulting text on your screen as a complex, functional coded pattern of dots of light and colour. At each stage, integrated, goal-directed intelligent action is deeply involved, deriving from intelligent agents -- engineers and computer programmers. We here consider of course digital signals, but in principle anything can be reduced to such signals, so this does not affect the generality of our thoughts.]

Now, it is of course entirely possible, that the apparent message is "nothing but" a lucky burst of noise that somehow got through the Internet and reached your machine. That is, it is logically and physically possible [i.e. neither logic nor physics forbids it!] that every apparent message you have ever got across the Internet -- including not just web pages but also even emails you have received -- is nothing but chance and luck: there is no intelligent source that actually sent such a message as you have received; all is just lucky noise:
"LUCKY NOISE" SCENARIO: Imagine a world in which somehow all the "real" messages sent "actually" vanish into cyberspace and "lucky noise" rooted in the random behaviour of molecules etc, somehow substitutes just the messages that were intended -- of course, including whenever engineers or technicians use test equipment to debug telecommunication and computer systems! Can you find a law of logic or physics that: [a] strictly forbids such a state of affairs from possibly existing; and, [b] allows you to strictly distinguish that from the "observed world" in which we think we live? That is, we are back to a Russell "five- minute- old- universe"-type paradox. Namely, we cannot empirically distinguish the world we think we live in from one that was instantly created five minutes ago with all the artifacts, food in our tummies, memories etc. that we experience. We solve such paradoxes by worldview level inference to best explanation, i.e. by insisting that unless there is overwhelming, direct evidence that leads us to that conclusion, we do not live in Plato's Cave of deceptive shadows that we only imagine is reality, or that we are "really" just brains in vats stimulated by some mad scientist, or we live in a The Matrix world, or the like. (In turn, we can therefore see just how deeply embedded key faith-commitments are in our very rationality, thus all worldviews and reason-based enterprises, including science. Or, rephrasing for clarity: "faith" and "reason" are not opposites; rather, they are inextricably intertwined in the faith-points that lie at the core of all worldviews. Thus, resorting to selective hyperskepticism and objectionism to dismiss another's faith-point [as noted above!], is at best self-referentially inconsistent; sometimes, even hypocritical and/or -- worse yet -- willfully deceitful. Instead, we should carefully work through the comparative difficulties across live options at worldview level, especially in discussing matters of fact. And it is in that context of humble self consistency and critically aware, charitable open-mindedness that we can now reasonably proceed with this discussion.)
In short, none of us actually lives or can consistently live as though s/he seriously believes that, absent absolute proof to the contrary, we must believe that all is noise. [To see the force of this, consider an example posed by Richard Taylor. You are sitting in a railway carriage and see stones you believe to have been randomly arranged, spelling out: "WELCOME TO WALES." Would you believe the apparent message? Why or why not?]

Q: Why then do we believe in intelligent sources behind the web pages and email messages that we receive, etc., since we cannot ultimately, absolutely prove that such is the case?

ANS: Because we believe the odds of such "lucky noise" happening by chance are so small that we intuitively simply ignore it. That is, we all recognise that if an apparent message is contingent [it did not have to be as it is, or even to be at all], is functional within the context of communication, and is sufficiently complex that it is highly unlikely to have happened by chance, then it is much better to accept the explanation that it is what it appears to be -- a message originating in an intelligent [though perhaps not wise!] source -- than to revert to "chance" as the default assumption. Technically, we compare how close the received signal is to the legitimate messages, and then decide that it is likely to be the "closest" such message. (All of this can be quantified, but this intuitive-level discussion is enough for our purposes.)

In short, we all intuitively and even routinely accept that: Functionally Specified, Complex Information, FSCI, is a signature of messages originating in intelligent sources. Thus, if we then try to dismiss the study of such inferences to design as "unscientific" when they may cut across our worldview preferences, we are plainly being grossly inconsistent.

Further to this, the common attempt to pre-empt the issue through the attempted secularist redefinition of science as in effect "what can be explained on the premise of evolutionary materialism -- i.e. primordial matter-energy joined to cosmological- + chemical- + biological macro- + sociocultural- evolution, AKA 'methodological naturalism' " [ISCID def'n: here] is itself yet another begging of the linked worldview-level questions. For in fact, the issue in the communication situation, once an apparent message is in hand, is: inference to (a) intelligent -- as opposed to supernatural -- agency [signal] vs. (b) chance-process [noise]. Moreover, at least since Cicero, we have recognised that the presence of functionally specified complexity in such an apparent message helps us make that decision. (Cf. also Meyer's closely related discussion of the demarcation problem here.)

More broadly, the decision faced once we see an apparent message is first to decide its source across a trichotomy: (1) chance; (2) natural regularity rooted in mechanical necessity (or, as Monod put it in his famous 1970 book, echoing Plato, simply: "necessity"); (3) intelligent agency. These are the three commonly observed causal forces/factors in our world of experience and observation. [Cf. the abstract of a recent technical, peer-reviewed, scientific discussion here. Also, cf. Plato's remark in his The Laws, Bk X, excerpted below.] Each of these forces stands at the same basic level as an explanation or cause, and so the proper question is to rule in/out the relevant factors at work, not to decide before the fact that one or the other is not admissible as a "real" explanation.
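[ --> Aside: the "closest legitimate message" comparison mentioned above can be sketched minimally as nearest-codeword (minimum Hamming distance) decoding. The six-bit codebook and the received word below are invented for the example.]

```python
# Minimal sketch of minimum-distance ("closest legitimate message") decoding.
# The codebook and the received word below are illustrative assumptions only.

CODEBOOK = {
    "000000": "A",  # each six-bit codeword stands for one legitimate message
    "111000": "B",
    "000111": "C",
    "111111": "D",
}

def hamming(a: str, b: str) -> int:
    """Count the positions at which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(received: str) -> str:
    """Map a (possibly noise-corrupted) word to the nearest legitimate codeword."""
    nearest = min(CODEBOOK, key=lambda word: hamming(word, received))
    return CODEBOOK[nearest]

# A single-bit corruption of "111000" still decodes to "B": the received
# signal is credited to the nearest island of legitimate messages, not to noise.
print(decode("110000"))  # -> B
```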
This often confusing issue is best initially approached/understood through a concrete example . . .
A CASE STUDY ON CAUSAL FORCES/FACTORS -- A Tumbling Die: Heavy objects tend to fall under the law-like natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance. But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes!
This concrete, familiar illustration should suffice to show that the three-causal-factors approach is not at all arbitrary or dubious -- as some are tempted to imagine or assert. [More details . . .]

Then also, in certain highly important communication situations, the next issue after detecting agency as the best causal explanation is whether the detected signal comes from (4) a trusted source, or (5) a malicious interloper, or is a matter of (6) unintentional cross-talk. (Consequently, intelligence agencies have a significant and very practical interest in the underlying scientific questions of inference to agency, then identification of the agent -- a potential (and arguably, probably actual) major application of the theory of the inference to design.)

Next, to identify which of the three is most important/the best explanation in a given case, it is useful to extend the principles of statistical hypothesis testing through Fisherian elimination to create the Explanatory Filter . . . . The explanatory filter allows for an evidence-based investigation of causal factors. By setting a quite strict threshold between chance and intelligence, i.e. the UPB [universal probability bound], a reliable inference to design may be made when we see especially functionally specific, complex information [FSCI]-rich patterns, but at the cost of potentially ruling "chance" incorrectly.

UNDERLYING LOGIC: Once the aspect of a process, object or phenomenon under investigation is significantly contingent, natural regularities rooted in mechanical necessity can plainly be ruled out as the dominant factor for that facet. So, the key issue is whether the observed high contingency is unambiguously, evidently purposefully directed, relative to the type and body of experiences or observations that would warrant a reliable inductive inference. For this, the UPB sets a reasonable, conservative and reliable threshold: unless (i) the search resources of the observed cosmos would generally be fruitlessly exhausted in an attempt to arrive at the observed result (or materially similar results) by random searches, AND (ii) the outcome is [especially functionally] specified, observed high contingency is by default assigned to "chance." Thus, FSCI and the associated wider concept, complex specified information [CSI], are identified as reliable (but not exclusive) signs of intelligence. [In fact, even though -- strictly -- "lucky noise" could account for the existence of apparent messages such as this web page, we routinely identify that if an apparent message has functionality, complexity and specification, it is better explained by intent than by accident, and confidently infer to intelligent rather than mechanical cause. This is proof enough -- on pain of self-referentially incoherent selective hyperskepticism -- of just how reasonable the explanatory filter is.]

________________

The second major step is to refine our thoughts through discussing communication theory's definition of, and approach to measuring, information. A good place to begin is with British communication theory expert F. R. Connor, who gives us an excellent "definition by discussion" of what information is:
From a human point of view the word 'communication' conveys the idea of one person talking or writing to another in words or messages . . . through the use of words derived from an alphabet [NB: he here means, a "vocabulary" of possible signals]. Not all words are used all the time and this implies that there is a minimum number which could enable communication to be possible. In order to communicate, it is necessary to transfer information to another person, or more objectively, between men or machines. This naturally leads to the definition of the word 'information', and from a communication point of view it does not have its usual everyday meaning. Information is not what is actually in a message but what could constitute a message. The word could implies a statistical definition in that it involves some selection of the various possible messages. The important quantity is not the actual information content of the message but rather its possible information content. This is the quantitative definition of information and so it is measured in terms of the number of selections that could be made. Hartley was the first to suggest a logarithmic unit . . . and this is given in terms of a message probability. [p. 79, Signals, Edward Arnold, 1972. Bold emphasis added. Apart from the justly classical status of Connor's series, his work, dating from before the ID controversy arose, is deliberately cited to give us an indisputably objective benchmark. ( --> It also happens to be where I started from.)]
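[ --> Aside: the explanatory filter logic described in the previous section can be set down as a minimal decision sketch. This is an illustrative paraphrase, not any published implementation; the 500-bit threshold constant and the boolean inputs (assumed settled by prior investigation of the aspect in question) are the assumptions named in the text.]

```python
# Minimal sketch of the explanatory filter's decision logic:
# necessity -> chance -> design, with a conservative complexity threshold.
# The 500-bit figure is the lower end of the rule of thumb cited later in
# this note; the boolean inputs are assumed settled by prior investigation.

THRESHOLD_BITS = 500

def explanatory_filter(high_contingency: bool,
                       functionally_specified: bool,
                       complexity_bits: float) -> str:
    """Classify one aspect of an object/process: necessity, chance, or design."""
    if not high_contingency:
        # Low contingency: a law-like regularity dominates this aspect.
        return "mechanical necessity"
    if functionally_specified and complexity_bits >= THRESHOLD_BITS:
        # Specified AND beyond the cosmos-level search resources.
        return "design"
    # Default assignment; deliberately biased so "chance" absorbs the doubt.
    return "chance"

print(explanatory_filter(True, True, 11_520_000))   # -> design
print(explanatory_filter(True, False, 11_520_000))  # -> chance
```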
To quantify the above definition of what is perhaps best descriptively termed information-carrying capacity, but has long been simply termed information (in the "Shannon sense" -- never mind his disclaimers . . .), let us consider a source that emits symbols from a vocabulary: s1, s2, s3, . . . sn, with probabilities p1, p2, p3, . . . pn. That is, in a "typical" long string of symbols, of size M [say, this web page], the average number that are some sj, J, will be such that the ratio J/M --> pj, and in the limit attains equality. We term pj the a priori -- before the fact -- probability of symbol sj.

Then, when a receiver detects sj, the question arises as to whether sj was indeed what was sent. [That is, the mixing in of noise means that received messages are prone to misidentification.] If, on average, sj will be detected correctly a fraction dj of the time, the a posteriori -- after the fact -- probability of sj is, by a similar calculation, dj. So, we now define the information content of symbol sj as, in effect, how much it surprises us on average when it shows up in our receiver:

I = log [dj/pj], in bits [if the log is base 2, log2] . . . Eqn 1

This immediately means that the question of receiving information arises AFTER an apparent symbol sj has been detected and decoded. That is, the issue of information inherently implies an inference to having received an intentional signal in the face of the possibility that noise could be present. Second, logs are used in the definition of I as they give an additive property: the amount of information in independent signals si + sj, using the above definition, is such that:

I_total = Ii + Ij . . . Eqn 2

For example, assume for the moment that dj is 1, i.e. we have a noiseless channel, so what is transmitted is just what is received. Then, the information in sj is:

I = log [1/pj] = - log pj . . . Eqn 3

This case illustrates the additive property as well, assuming that symbols si and sj are independent. That means that the probability of receiving both messages is the product of the probabilities of the individual messages (pi * pj); so:

I_tot = log [1/(pi * pj)] = [- log pi] + [- log pj] = Ii + Ij . . . Eqn 4

So if there are two symbols, say 1 and 0, and each has probability 0.5, then for each, I is - log [1/2], on a base of 2, which is 1 bit. (If the symbols were not equiprobable, the less probable binary digit-state would convey more than, and the more probable less than, one bit of information. Moving over to English text, we can easily see that E is as a rule far more probable than X, and that Q is most often followed by U. So, X conveys more information than E, and a U following a Q conveys very little, though it is useful as redundancy, which gives us a chance to catch errors and fix them: if we see "wueen" it is most likely to have been "queen.")

Further to this, we may average the information per symbol in the communication system thus (giving it in terms of -H to make the additive relationships clearer):

- H = p1 log p1 + p2 log p2 + . . . + pn log pn, or, H = - SUM [pi log pi] . . . Eqn 5

H, the average information per symbol transmitted [usually measured as bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: "it is often referred to as the entropy of the source." [p. 81, emphasis added.]
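[ --> Aside: Eqns 3 and 5 are easy to check numerically. A minimal sketch follows; the symbol probabilities used are made-up illustrations.]

```python
# Numerical check of Eqns 3 and 5: per-symbol surprisal I = -log2(p) for a
# noiseless channel, and source entropy H = -SUM pi log2 pi (bits/symbol).
# The probabilities below are illustrative assumptions only.
from math import log2

def surprisal_bits(p: float) -> float:
    """Information carried by one symbol of probability p (Eqn 3)."""
    return -log2(p)

def entropy_bits(probs):
    """Average information per symbol, H (Eqn 5)."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(surprisal_bits(0.5))        # 1.0 bit: a fair binary digit
print(entropy_bits([0.5, 0.5]))   # 1.0 bit/symbol for equiprobable 0 and 1
print(entropy_bits([0.9, 0.1]))   # ~0.47: a skewed source carries less per symbol
```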
Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1 below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form. [--> this bridges back to the clip already given]
We can now see how information, intelligence, design and entropy are all closely linked. Indeed, the entropy of a system can be seen as a metric of the average missing information needed to specify its particular micro-state, given only the gross values that characterise its macroscopically observable state. Also, it is useful to go beyond the focus on info-carrying capacity to look again at information as a functional issue:
As the third major step, we now turn to information technology, communication systems and computers, which provide a vital clarifying side-light from another view on how complex, specified information functions in information processing systems:

[In the context of computers] information is data -- i.e. digital representations of raw events, facts, numbers and letters, values of variables, etc. -- that have been put together in ways suitable for storing in special data structures [strings of characters, lists, tables, "trees" etc.], and for processing and output in ways that are useful [i.e. functional]. . . . Information is distinguished from [a] data: raw events, signals, states etc. represented digitally, and [b] knowledge: information that has been so verified that we can be reasonably warranted in believing it to be true. [GEM, UWI FD12A Sci Med and Tech in Society Tutorial Note 7a, Nov 2005.]

That is, we have now made a step beyond mere capacity to carry or convey information, to the function fulfilled by meaningful -- intelligible, difference-making -- strings of symbols. In effect, we here introduce into the concept "information" the meaningfulness, functionality (and indeed, perhaps even purposefulness) of messages -- the fact that they make a difference to the operation and/or structure of systems using such messages, thus to outcomes; thence, to relative or absolute success or failure of information-using systems in given environments. And, such outcome-affecting functionality is of course the underlying reason/explanation for the use of information in systems. [Cf. the recent peer-reviewed, scientific discussions here, and here, by Abel and Trevors, in the context of the molecular nanotechnology of life.] Let us note as well that since in general analogue signals can be digitised [i.e. by some form of analogue-to-digital conversion], the discussion thus far is quite general in force.

So, taking these three main points together, we can now see how information is conceptually and quantitatively defined, how it can be measured in bits, and how it is used in information processing systems; i.e., how it becomes functional. In short, we can now understand that: Functionally Specific, Complex Information [FSCI] is a characteristic of complicated messages that function in systems to help them practically solve problems faced by the systems in their environments. Also, in cases where we directly and independently know the source of such FSCI (and its accompanying functional organisation), it is, as a general rule, created by purposeful, organising intelligent agents. So, on empirical-observation-based induction, FSCI is a reliable sign of such design, e.g. the text of this web page, and of billions of others all across the Internet. (Those who object to this therefore face the burden of showing empirically that such FSCI does in fact -- on observation -- arise from blind chance and/or mechanical necessity without intelligent direction, selection, intervention or purpose.)
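[ --> Aside: a toy illustration of the data-to-information step just described. The raw record format and field names are invented for the example.]

```python
# Toy illustration of the data -> information step described above.
# "Data": a raw digital record of sensor readings (format invented here).
# "Information": the same digits put into a data structure suitable for
# useful -- i.e. functional -- processing and output.

raw_record = "20170309T2206,21.4,1013"  # timestamp, temperature (C), pressure (hPa)

def to_information(record: str) -> dict:
    """Parse one raw record into a functional structure."""
    timestamp, temperature, pressure = record.split(",")
    return {
        "timestamp": timestamp,
        "temp_c": float(temperature),
        "pressure_hpa": float(pressure),
    }

reading = to_information(raw_record)
print(f"{reading['timestamp']}: {reading['temp_c']} C")  # functional output
```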
Indeed, this FSCI perspective lies at the foundation of information theory: (i) recognising signals as intentionally constructed messages transmitted in the face of the possibility of noise; (ii) where also, intelligently constructed signals have characteristics of purposeful specificity, controlled complexity and system-relevant functionality based on meaningful rules that distinguish them from meaningless noise; (iii) further noticing that signals exist in functioning generation-transfer and/or storage-destination systems that (iv) embrace co-ordinated transmitters, channels, receivers, sources and sinks. That this is broadly recognised as true can be seen from a surprising source, Dawkins, who is reported to have said in his The Blind Watchmaker (1987), p. 8:

Hitting upon the lucky number that opens the bank's safe [NB: cf. here the case in Brown's The Da Vinci Code] is the equivalent, in our analogy, of hurling scrap metal around at random and happening to assemble a Boeing 747. [NB: originally, this imagery is due to Sir Fred Hoyle, who used it to argue that life on earth bears characteristics that strongly suggest design. His suggestion: panspermia -- i.e. life drifted here, or else was planted here.] Of all the millions of unique and, with hindsight equally improbable, positions of the combination lock, only one opens the lock. Similarly, of all the millions of unique and, with hindsight equally improbable, arrangements of a heap of junk, only one (or very few) will fly. The uniqueness of the arrangement that flies, or that opens the safe, has nothing to do with hindsight. It is specified in advance. [Emphases and parenthetical note added, in tribute to the late Sir Fred Hoyle. (NB: This case also shows that we need not see boxes labelled "encoders/decoders" or "transmitters/receivers" and "channels" etc. for the model in Fig. 1 above to be applicable; i.e. the model is abstract rather than concrete: the critical issue is functional, complex information, not electronics.)]

Here, we see how the significance of FSCI naturally appears in the context of considering the physically and logically possible but vastly improbable creation of a jumbo jet by chance. Instantly, we see that mere random chance acting in a context of blind natural forces is a most unlikely explanation, even though the statistical behaviour of matter under random forces cannot strictly rule it out. But it is so plainly vastly improbable that, having seen the message -- a flyable jumbo jet -- we then make a fairly easy and highly confident inference to its most likely origin: i.e. it is an intelligently designed artifact. For, the a posteriori probability of its having originated by chance is obviously minimal -- which we can intuitively recognise, and can in principle quantify.

FSCI is also an observable, measurable quantity, contrary to what is imagined, implied or asserted by many objectors. This may be most easily seen by using a quantity we are familiar with: functionally specific bits [FS bits], such as those that define the information on the screen you are most likely using to read this note:

1 --> These bits are functional, i.e. presenting a screenful of (more or less) readable and coherent text.

2 --> They are specific, i.e. the screen conforms to a page of coherent text in English in a web browser window, defining a relatively small target/island of function by comparison with the number of arbitrarily possible bit configurations of the screen.
3 --> They are contingent, i.e. your screen can show diverse patterns, some of which are functional, some of which -- e.g. a screen broken up into "snow" -- would not (usually) be.

4 --> They are quantitative: a screen of such text at 800 * 600 pixel resolution, each pixel of bit depth 24 [8 each for R, G, B], has in its image 480,000 pixels, with 11,520,000 hard-working, functionally specific bits.

5 --> This is of course well beyond the "glorified common-sense" 500 - 1,000 bit rule-of-thumb complexity threshold at which contextually and functionally specific information is sufficiently complex that the explanatory filter would confidently rule such a screenful of text "designed," given that no search on the gamut of our observed cosmos -- which has at most some 10^150 possible quantum states of its atoms -- can exceed 10^150 steps . . .
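[ --> Aside: the arithmetic in points 4 and 5 can be checked directly. A minimal sketch, using the screen geometry assumed above.]

```python
# Checking the arithmetic of points 4 and 5: functionally specific bits in
# an 800 x 600 screen at 24-bit depth, against the 500-1,000 bit rule of
# thumb and the 10^150 search-resource bound cited above.

WIDTH, HEIGHT, BIT_DEPTH = 800, 600, 24  # geometry assumed in the text

pixels = WIDTH * HEIGHT          # 480,000 pixels
fs_bits = pixels * BIT_DEPTH     # 11,520,000 functionally specific bits
print(pixels, fs_bits)

configs = 2 ** fs_bits           # arbitrarily possible bit configurations
print(fs_bits > 1_000)           # True: far beyond the rule-of-thumb threshold
print(configs > 10 ** 150)       # True: dwarfs the cited cosmic search bound
```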
We can go on, but the above is enough backdrop for now. KF

PS: Still very busy locally.

kairosfocus
March 9, 2017, 10:06 PM PDT
critical rationalist
CR: Is error correction random? Is it random that mutation X is retained, while mutation Y is not?

A severe winter, an epidemic or whatever can wipe out mutation Y. Is that random? You betcha.

CR: No!

Yes!

CR: Mutation Y doesn't play a causal role in its being retained when instantiated in a storage medium.

By 'being retained' do you mean 'not eliminated'? If so, I agree. One could say that 'being retained' is the absence of elimination by natural selection. Rather meaningless, right?

CR: X does. And it does so because X contains some approximation of truth about some problem in the biosphere, even if the organism cannot comprehend that problem or is not even aware of it.

And I bet that Y also contains 'some approximation of truth about some problem in the biosphere', but Y gets eliminated nonetheless. Maybe Y cannot cope with a severe winter, but has a unique solution to a hot summer that will kill X. We will never know, because all that information is lost thanks to the hindrance called 'natural selection'. "Everyone is world champion in a sport that hasn't been invented yet," someone once wrote. The same applies here: every organism has some approximation of truth about some problem in the biosphere. It is a meaningless statement.

CR: People start out with problems, then guess solutions to those problems. Unless we have some infallible way to identify and interpret sources, we start out knowing that our ideas contain errors to some degree. Error correction isn't random in that case either.

Of course it isn't. However, you cannot compare free, responsible, rational persons with blind particles bumping into each other.

Origenes
March 9, 2017, 02:03 PM PDT
@Origenes, you wrote:

You keep repeating that. I agree 100%. I hold that a proper understanding of evolutionary theory entails that anything creative is due to sheer dumb luck / randomness, so I do agree with you.
Is error correction random? Is it random that mutation X is retained, while mutation Y is not? No! Mutation Y doesn't play a causal role in its being retained when instantiated in a storage medium. X does. And it does so because X contains some approximation of truth about some problem in the biosphere, even if the organism cannot comprehend that problem or is not even aware of it. People start out with problems, then guess solutions to those problems. Unless we have some infallible way to identify and interpret sources, we start out knowing that our ideas contain errors to some degree. Error correction isn't random in that case either. People exhibit universality in that they can create explanations about how the world works. The process of evolution cannot. As such, bacteria are the result of non-explanatory knowledge. Neither has any guarantee of starting out correct. Both rely on error correction.

critical rationalist
March 9, 2017, 01:12 PM PDT
Critical rationalist @149

CR: For the umpteenth time, you are ignoring that it is a process. Evolution does not suggest that any protein of today's complexity was randomly generated all at once.

Nor do I suggest 'all at once'. So?

CR: Knowledge is information that plays a causal role in being retained when embedded in a storage medium.

You have yet to explain the existence of such a system.

CR: Genes that are better at being passed down to the next generation play that causal role.

The causal role of remaining what they are in the next generation?

CR: Selection represents error correction and genes represent knowledge. That is the non-random factor.

Aha, 'selection' is the non-random factor. Well, no: 'selection' (read: elimination), instead of being 'error correction', is a severe hindrance to evolution. Perfectly viable organisms are offered by complete randomness — it's a miracle! — and what does 'selection' do? It kills off the vast majority. Behold the alleged 'creativity' of "natural selection"! This whole theory is fake.

Darwin: Thus, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows.

Origenes
March 9, 2017, 10:04 AM PDT
UB, having re-read some of your comments, perhaps you're referring to the reach of DNA and its leap to universality? IOW, are you referring to the universality of computation?

critical rationalist
March 9, 2017, 09:42 AM PDT
@Origenes For the umpteenth time, you are ignoring that it is a process. Evolution does not suggest that any protein of today's complexity was randomly generated all at once. Knowledge is information that plays a causal role in being retained when embedded in a storage medium. Genes that are better at being passed down to the next generation play that causal role. Selection represents error correction and genes represent knowledge. That is the non-random factor. Or perhaps you have some other definition of random that you would like to present, and are referring to here?

Again, to be clear, I'm coming from a universal theory of the growth of knowledge, on which the knowledge in books, brains and even genes is explainable using the same umbrella theory. Nor does it assume that knowledge in specific spheres comes from authoritative sources. Knowledge genuinely grows and is created. On the other hand, my guess is that you disagree that any such unification is possible, and hold that knowledge in some spheres does come from authoritative sources: the knowledge in question was merely copied from the "mind" of a designer that "just was", complete with that knowledge already present.

critical rationalist
March 9, 2017, 09:21 AM PDT
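[Aside, for readers following this exchange: the repeated variation-plus-selection process CR describes is often illustrated with a cumulative-selection toy of the sort popularised in Dawkins' The Blind Watchmaker (quoted earlier in this post). Below is a minimal sketch; the target phrase, alphabet, offspring count and mutation rate are arbitrary choices, and note that the selection criterion has the full target built in -- which is precisely the point the two parties dispute.]

```python
# Toy cumulative-selection sketch ("weasel"-style): random per-character
# variation plus non-random retention of the highest-scoring candidate.
# Target, alphabet, offspring count and mutation rate are arbitrary choices;
# the score function has the complete target built in.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
MUTATION_RATE = 0.05
OFFSPRING = 100

def score(candidate: str) -> int:
    """Non-random selection criterion: characters matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str) -> str:
    """Random variation: each character may be independently replaced."""
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in parent)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while parent != TARGET:
    generations += 1
    parent = max((mutate(parent) for _ in range(OFFSPRING)), key=score)

print(f"Reached target in {generations} generations")
```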
critical rationalist @146
CR: Again in the process of evolution, variations are random to any specific problem to solve
You keep repeating that. I agree 100%. I hold that a proper understanding of evolutionary theory entails that anything creative is due to sheer dumb luck / randomness, so I do agree with you. I hold that variations are completely random to any specific problem to solve and completely random to any non-specific problem to solve. Variations are completely random — period.
CR: … not completely random.
For the umpteenth time: why not?
CR: Complexity grows in a piecemeal fashion.

And every evolutionary step is due to complete randomness. So tell me, what is the non-random factor here?

Origenes
March 9, 2017, 08:11 AM PDT
@Eric Anderson
With due respect, this so-called “constructor theory” seems to add little of substance...
Can you be more specific than "seems to add little of substance"?
...and also appears to contain serious misunderstandings about the nature of both information and physical laws.
Which physical theory of information, which CT supposedly "misunderstands", are you referring to? Please be specific. As for being off topic, was UB incorrect when he said...
Also, the information in DNA (the topic of this conversation) doesn’t need to be “brought into fundamental physics” by “constructor theory”; it has been well-understood in terms of fundamental physics for a great number of years. Additionally, I don’t know why you introduced Shannon to the conversation.
I posted a link to a public article on Aeon, then posted links to published papers when a similar claim was made.

critical rationalist
March 9, 2017, 08:07 AM PDT
@Origenes I wrote:
CR: Variation in the process of evolution is not completely random. This is because it’s a repeating process of variation and selection, not just variation on its own.
You wrote:
So, mutations (or ‘variations’ if you prefer) are not random because selection is not random? For clarity, let’s tease it apart: we have process A (mutation) and process B (selection). Now which of those is not random and why?
Let's ignore that it's a process? And my response was inapt? Again: in the process of evolution, variations are random to any specific problem to solve, not completely random. What you seem to be implying is that an entire protein was created from scratch, all at once, from random variations. That's not evolutionary theory. Complexity grows in a piecemeal fashion. That clarification is sufficient to indicate that the role variation plays is not goal-oriented, yet not completely random.

critical rationalist
March 9, 2017, 07:48 AM PDT
critical rationalist: With due respect, this so-called "constructor theory" seems to add little of substance and also appears to contain serious misunderstandings about the nature of both information and physical laws. Again, though, feel free to write up a brief exposition in your own words, with a few links to the key research in this area, and I'll elevate it to a new thread for discussion, as it is largely OT here.

Eric Anderson
March 9, 2017, 07:43 AM PDT
@UB,
CR, your theory has been thoroughly criticized. It presents nothing substantive about the most unique and important aspect of the system it's being applied to. It doesn't even mention it.
It doesn't? Then please point out which comment contains this criticism, in which the relationship between information and physics is not relevant.

critical rationalist
March 9, 2017, 07:27 AM PDT
CR: Variation in the process of evolution is not completely random. This is because it’s a repeating process of variation and selection, not just variation on its own.
So, mutations (or ‘variations’ if you prefer) are not random because selection is not random? For clarity, let’s tease it apart: we have process A (mutation) and process B (selection). Now which of those is not random and why?
CR: Just as in the growth of human knowledge, guesses are not completely random.
Inapt comparison, for reasons already provided.

Origenes
March 9, 2017, 01:46 AM PDT