Uncommon Descent Serving The Intelligent Design Community

Of Pulsars and Pauses


DrREC is not just any Darwinist.  He holds a doctorate and has published on complex matters of biology in peer-reviewed journals.  He is not stupid.  That's why I like to use his examples in my posts.  I am not picking on a defenseless layman.  He's among the Darwinists' best and brightest.  So let's get to his latest pronouncement from on high:

DrREC writes: 

Pulsars often have a complex behavior. But is it specified? If we took the pattern of pulses we detect as the ‘design specification’ — the pattern we search for, we would conclude yes. Totally and undeniably circular. Prove me wrong.

Here's the problem with DrREC's reasoning.  He seems to assume (despite being told the contrary numerous times) that any "pattern" can be designated post hoc as "specified."  He does not seem to understand the most basic concepts of design theory.  The answer is that not just any pattern can legitimately be called a specification.

In a comment to my prior post, Bruce David explains the concept nicely as follows:

Dembski's work builds on that of earlier probability theorists who were wrestling with the problem that, for example, any pattern of heads and tails obtained by tossing a coin 100 times is equally improbable, yet intuitively, a pattern of 50 heads followed by 50 tails is in some sense far less probable than a 'normal' random pattern. In order to solve this conundrum, they came up with the idea of specification: if the pattern of heads and tails can be described independently of the actual pattern itself, then it is specified, and specified patterns can be said to be non-random. And note, the pattern does not have to be described ahead of time; the requirement is just that it is capable of being described independently of the actual pattern itself. In other words, a normal 'random' pattern can only be described by something equivalent to 'the first toss was heads, the second heads, the third tails,' and so on, whereas the example above is specified because it can be described as I already have, namely, '50 heads followed by 50 tails'.
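Bruce's coin example can be made concrete. Below is a minimal sketch (my illustration, not Dembski's formal apparatus) that uses compressed length as a crude stand-in for "can be described independently of the actual pattern itself": the independently describable pattern compresses to far fewer bytes than a typical random run of the same length, even though both outcomes have probability 1 in 2^100.

```python
# A rough illustration, not a formal specification measure: a pattern with a
# short independent description ("50 heads then 50 tails") compresses far
# better than a typical random sequence, whose shortest description is
# essentially the sequence itself.
import random
import zlib

structured = "H" * 50 + "T" * 50                               # 50 heads, then 50 tails
random_seq = "".join(random.choice("HT") for _ in range(100))  # a typical random run

for label, seq in [("structured", structured), ("random", random_seq)]:
    print(label, len(zlib.compress(seq.encode())), "bytes compressed")

# Both outcomes are equally improbable (2**-100), but only the structured one
# admits a description much shorter than the raw sequence.
```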

Back to DrREC's question.  The pulses from the pulsar are indeed highly complex (i.e., improbable).  But they are never specified, because they cannot be, as Bruce says, "described independently of the actual pattern itself."  Therefore, if we "took the pattern of pulses we detect as the 'design specification'" even though that pattern could not be described independently of the actual pattern itself, we would simply be wrong.  That pattern does not conform to the definition of a specification.

DrREC basically says, "If we call any pattern we find a 'specification,' then any pattern we find will be a 'specification,' and that gets us nowhere."  Well, of course he is right as far as it goes.  But at a deeper and more meaningful level he is wrong, because no one says you can call just any pattern you find a specification.  The pattern must conform to a strict criterion before it can be considered a specification.

So DrREC, I answered your question.  While we are on the issue of pulses you can answer mine.  Suppose researchers detect a repeating series of 1,126 pulses and pauses of unknown origin.  The pulses and pauses start like this (with ones corresponding to pulses and zeros corresponding to pauses):  110111011111011111110 . . .  After analyzing the series they determine that the zeros are spaces between numbers and the ones add up to numbers.  Thus, the excerpt I reproduced would be 2, 3, 5 and 7, the first four prime numbers.  The researchers suddenly realize that the 1,126 pulses and pauses represent the prime numbers between 1 and 100.  (Obviously, this was the series in the movie Contact.)

My question for you DrREC is this:  Would you join Arch-atheist, uber-materialist, Darwinist Carl Sagan and conclude that this series is obviously designed by an intelligent agent?  If so, why?  After all, it is a hard fact that this series of 1,126 pulses and pauses is NO MORE IMPROBABLE than any other series of 1,126 pulses and pauses.
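For readers who want to check the decoding themselves, here is a minimal sketch (mine, for illustration; the decoding rule is the one described above, with runs of ones summing to numbers and zeros acting as separators):

```python
# Decode a pulse/pause string: each run of ones (pulses) sums to a number,
# and zeros (pauses) separate the numbers.
def decode_pulses(signal: str) -> list:
    return [len(group) for group in signal.split("0") if group]

def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

excerpt = "110111011111011111110"            # the excerpt quoted in the post
numbers = decode_pulses(excerpt)
print(numbers)                                # [2, 3, 5, 7]
print(all(is_prime(n) for n in numbers))      # True
```

On the same rule, the full series would decode to the primes between 1 and 100: a pattern describable independently of the pulses themselves, which is what qualifies it as a specification.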

Comments
Funny, everyone keeps coming up with analogies where the design is not in question! It is almost as if they assume design, and proceed from there. Oh, right. The funny thing Eric can't do is tell us what the independent specifications for protein design are. But at any rate, there are a couple of things that went unanswered. Despite the attempts to distract with other analogies, my post at 9.3 clearly demonstrates IN PRACTICE that fsci calculations narrowly and subjectively define a design as part of estimating functional space.
DrREC
December 16, 2011 at 02:15 PM PDT
Analogies are of no use in the example I am using, because a key claim I am making is that protein folding is not predictable from a coding sequence.
I didn't make an analogy. I used an example against which your claims could be directly tested. You don't seem to dispute that what you asserted as a rule of logic was false, so I guess we've moved on from that. "Unpredictable" is synonymous with "random." Are you saying that coding sequences result in random protein folds? I'm pretty sure every second of our lives depends upon the predictability of protein folds from coding sequences. How can you say that they are unpredictable?
ScottAndrews2
December 16, 2011 at 01:54 PM PDT
Petrushka: "But at other times the analogy switches to objects like motors . . ." No one is switching analogies because it is convenient. A digital code is an example of complex specified information. An integrated functional system is an example of complex specified information. There are lots of examples. No one is switching anything. BTW, analogies are useful and there is nothing wrong with them as far as they go in helping us think through things. But in this case we don't even have to analogize. The code in DNA is a digital code; it isn't just like a digital code. Molecular motors in living cells aren't just like motors, they are motors.
Eric Anderson
December 16, 2011 at 01:37 PM PDT
Analogies are of no use in the example I am using, because a key claim I am making is that protein folding is not predictable from a coding sequence. I am making this claim because decades of research have revealed no shortcuts and no glimpse of a shortcut. And unlike will-o'-the-wisps like the Higgs particle, there is no theoretical reason to expect a shortcut. This is a very narrow and specific claim that could be proven wrong by a single counterexample. It is relevant to the calculation of "information" in the genome because you have no way of determining the information content of a sequence if you can't account for the necessity of each character in the sequence string. I assert you have no theory of sequence formation or sequence utility or syntax that would enable design. Regardless of asserted probabilities, the only reasonable way to build a protein or protein domain is to try modifications of existing sequences. There are theoretical ways to do it atom by atom, but chemistry is billions of times faster. My claim is that chemistry will always be faster than computation, and evolution is the fastest and most efficient way to navigate through functional sequence space.
Petrushka
December 16, 2011 at 01:36 PM PDT
P, This is ever so inadvertently revealing:
I find it interesting that when it seems convenient to ID, the code is digital (and subject to being assembled by incremental accumulation). But at other times the analogy switches to objects like motors that are not digitally coded and do not reproduce with variation.
1 --> Codes, generally speaking, use symbolic representations [whereby one thing maps to another and per a convention MEANS that], and are inherently discrete state, i.e. digital.
2 --> The DNA --> RNA --> Ribosome --> AA chain for protein system uses just such symbols, and goes through transcription, processing that allows reordering, and translation in a translation device that is also a manufacturing unit for proteins.
3 --> The fact that you find yourself resisting such patent and basic facts is revealing.
4 --> A motor is a functional, composite entity. It is made up from parts that fit together in a certain specific way, per an exploded-view wiring diagram, and when they fit together they do a job.
5 --> As has been pointed out for a long time now, that sort of 3-D exploded view can be converted into a cluster of linked strings that express the implied information, as in Autocad etc.
6 --> However, the point of a motor is that it does a certain job, converting energy into shaft work, often but not always in rotary form. (Linear motors exist and are important.)
7 --> A lot of ways and means can be used to generate the torque [power = torque * speed], but rotary motors generally have a shaft that carries the load on the output port, and an energy converter on the input port. (Motors are classic two-port devices.)
8 --> Electrical motors work off the Lorentz force [which in turn is in large part a reflection of relativistic effects of the Coulomb force]; hydraulic and pneumatic ones, off fluid flows; some motors work off expanding combustion products, etc.
9 --> Two of the motors in living forms seem to work off ion flows and associated barrier potentials. Quite efficiently and effectively too.
10 --> Wiki, testifying against known ideological interest:
An engine or motor is a machine designed to convert energy into useful mechanical motion.[1][2] Heat engines, including internal combustion engines and external combustion engines (such as steam engines) burn a fuel to create heat which is then used to create motion. Electric motors convert electrical energy into mechanical motion, pneumatic motors use compressed air and others, such as wind-up toys, use elastic energy. In biological systems, molecular motors like myosins in muscles use chemical energy to create motion.
11 --> In short, we see here a recognition of what you are so desperate to resist: there are chemically powered, molecular-scale motors in the body, here citing a linear motor that makes muscles work.
12 --> So, there is no reason why we should suddenly cry "analogy" -- shudder -- when we see similar, ion-powered rotary motors in the living cell.
______________
In short, the strained objections we are seeing are telling us a lot, and not to your benefit. GEM of TKI
kairosfocus
December 16, 2011 at 12:54 PM PDT
The interesting thing is there are so many different versions of flagella, and so many genomes containing bits and pieces of the code, used for so many different purposes. There are at least 20 different species of microbes having subunits of the flagellum code
So what? Do you not realize that in the macro world there are single component parts that are used in a multitude of disparate systems? Nuts & bolts are the most obvious example. In fact, a good engineer strives to make the hardware from system to system as standard as possible. The more variation there is in the hardware, the more headaches it causes. Copper wiring is another example, with wiring of the same gauge and shielding used all over the place. Standard circuit cards, standard housings for gear boxes, standard junctions, standard belts, and the list goes on forever. Standard components are just as much a sign of design as anything, friend.
I find it interesting that when it seems convenient to ID, the code is digital. But at other times the analogy switches to objects like motors that are not digitally coded and do not reproduce with variation
What's your point here? Are you suggesting that the flagellum is not a motor because its components are constructed of discrete modular building blocks? If so, that's asinine. And here's some news for you: motors designed by humans in the macro world are reproduced also. Weird, huh? And what if the flagellum varies over time; does that plasticity make it not a motor? Nope. Still a motor.
M. Holcumbrink
December 16, 2011 at 12:10 PM PDT
Let's make it more even by removing your specification and hiding the source of the information. Keep in mind that this is an illustration, not an analogy. Your assertions can be applied directly to it. You describe a series of physical symptoms to a man. He then speaks in a language you do not understand to a second man, who specifically treats all but one of your symptoms. The treatment was specific. But who specified it? Not you. You didn't know about that treatment. Was it the man you spoke to, or the man he translated to? He spoke another language. You don't know what he said. It could have been a) random, b) irrelevant, c) a translation of your symptoms, d) instructions for treatment, or a degraded version of c or d, or a combination of any of the above. Maybe he described one symptom, gave instructions for treating another symptom, got one wrong, and asked what the other guy wanted for lunch. You have no idea what the content of that sequence was or whether it was degraded, and yet it evidently contained functional content. You are able to determine it by the output, which you could not have specified because it applied medical knowledge you did not have. Compare that to this:
You cannot distinguish a degraded functional sequence from a random sequence …and… If you cannot distinguish a degraded functional sequence from a random sequence, you cannot assign bits of information to it.
It's not like I spent a few hours brainstorming these scenarios. They were easy. And they demonstrate clearly that what you are asserting is false.
ScottAndrews2
December 16, 2011 at 10:01 AM PDT
The effect was described in the illustration.
If you give instructions to a man and he translates them in language you don’t understand to a second man who then follows most but gets a few things wrong, you cannot ascribe any functional information to those translated instructions because you don’t know which part was degraded.
I've deliberately put this side-by-side with your statements. Your statements are simple, as is the scenario.
You cannot distinguish a degraded functional sequence from a random sequence …and… If you cannot distinguish a degraded functional sequence from a random sequence, you cannot assign bits of information to it.
In the scenario above, the sequence is clearly functional, although apart from the functional effect you cannot distinguish it from a random sequence, a specified sequence, or a degraded specified sequence.
ScottAndrews2
December 16, 2011 at 09:39 AM PDT
and yet you can determine from the effect alone that the sequence does contain functional information.
What effect? Tell us how to distinguish a functionless sequence that is one character from being functional from a population of random sequences.
Petrushka
December 16, 2011 at 08:31 AM PDT
Petrushka,
You are simply wrong about this. I asked if you could distinguish a coding sequence that has been degraded by one character change, from a population of randomly generated sequences. I make only one stipulation, that the sequence must not match any known functional sequences. It must be unique.
I am right (and simply). If someone speaks in a language you do not understand, you cannot distinguish any of what is said from random noise. Therefore you cannot tell from the sequence itself whether it is entirely random and functionless, functional and perfect, or functional and degraded. You stated:
You cannot distinguish a degraded functional sequence from a random sequence ...and... If you cannot distinguish a degraded functional sequence from a random sequence, you cannot assign bits of information to it.
This is obviously false because in this scenario, you cannot distinguish a degraded functional sequence from a random sequence, and yet you can determine from the effect alone that the sequence does contain functional information. I'm quoting your words repeatedly because they are simple and clear, and I am placing them alongside a simple, clear scenario which demonstrates that they are wrong.
ScottAndrews2
December 16, 2011 at 08:16 AM PDT
The interesting thing is there are so many different versions of flagella, and so many genomes containing bits and pieces of the code, used for so many different purposes. There are at least 20 different species of microbes having subunits of the flagellum code. I find it interesting that when it seems convenient to ID, the code is digital (and subject to being assembled by incremental accumulation). But at other times the analogy switches to objects like motors that are not digitally coded and do not reproduce with variation.
Petrushka
December 16, 2011 at 08:00 AM PDT
DrREC:
Seriously, do you think some explorer could wander up on Easter Island, and say those look natural? Or would the knowledge of statues in human design be sufficient?
Excellent. So finally we get nearer to the heart of the matter. DrREC acknowledges that we don't have to know the exact specification we are looking for. It is enough to have seen some similar systems. In other words, we look at a system of unknown origin and analogize to systems that we do know. This is one important aspect (though not complete) of the way we draw design inferences. We work from what we know, not from what we don't know. We work from our understanding of cause and effect in the world. And with those tools under our belt, we consistently and regularly infer design as the most appropriate explanation, even when we don't know the exact specification we will find. DrREC has no issue with this approach. He thinks it is perfectly reasonable and appropriate. He even suggests above that it is absurd to think otherwise. All correct.
Eric Anderson
December 16, 2011 at 07:39 AM PDT
Without understanding the language at all, you can tell that it is not random because it conveys at least some of your specifications, even if the content is degraded.
You are simply wrong about this. I asked if you could distinguish a coding sequence that has been degraded by one character change, from a population of randomly generated sequences. I make only one stipulation, that the sequence must not match any known functional sequences. It must be unique. The answer is that you can't. When designing a completely new coding sequence, you cannot tell when you are 50 percent done, or 90 percent done, or 99.99 percent done. That is not true of syntactical languages. If you are writing a computer program or a sonnet, you can distinguish progress. A partial computer program will do something, even if it is nothing but a goto statement. A sentence fragment is distinguishable from random characters. a sntnce wit speling erors an grammer mistackes can be distingushd form gbbirsh.
Petrushka
December 16, 2011 at 07:38 AM PDT
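Petrushka's closing example, that a sentence with spelling errors can be distinguished from gibberish, is easy to illustrate. The sketch below is a toy (the bigram list and scoring are ad hoc choices of mine, not anyone's formal method): it scores strings by how often they use common English letter pairs, and the degraded sentence scores well above random characters.

```python
# Toy discriminator: degraded English still uses common English letter pairs
# far more often than random characters do. Bigram list and scoring are ad hoc.
import random
import string

COMMON_BIGRAMS = {"th", "he", "in", "er", "an", "re", "on", "at", "en",
                  "nd", "ti", "es", "or", "te", "ed", "is", "it", "ng"}

def bigram_score(text: str) -> float:
    """Fraction of adjacent letter pairs that are common English bigrams."""
    letters = [c for c in text.lower() if c.isalpha()]
    pairs = ["".join(p) for p in zip(letters, letters[1:])]
    return sum(p in COMMON_BIGRAMS for p in pairs) / max(len(pairs), 1)

degraded = "a sntnce wit speling erors an grammer mistackes"
gibberish = "".join(random.choice(string.ascii_lowercase + " ") for _ in range(48))

print(bigram_score(degraded))    # noticeably higher, on average...
print(bigram_score(gibberish))   # ...than the score for random characters
```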
Molecular ‘motors’ are an analogy drawn to human design. Now you’ve taken the analogy too far
Take ATP synthase or the flagellum: these molecular motors are composed of simple machines, e.g. wheels & axles (a free-turning rotor which is constrained in 5 degrees of freedom by a stator embedded in the membrane), ramps (which transform linear momentum to rotational momentum due to a flow of ions), levers (a clutch mechanism to reverse the direction of rotation), and screws (as the filament turns it acts as a propeller). Any machine designed and built in the macro world contains some or all of these simple machines. And please note the purpose of such is to transform one form of energy into another. In the case of ATP synthase and the flagellum, the energy of a proton gradient is converted into torque, which is used to generate chemical energy (ATP) and linear motion, respectively. Motors are a physical mechanism by which a form of potential energy is channeled and converted into a form of *useful* energy. This is exactly what we see in the cell, which means we are not speaking in analogies here. These are actual motors, in every sense of the term. Now I would ask you: do you avoid calling these things "motors" in an effort to avoid the clear, purposeful design implications, or because you are fundamentally ignorant of what motors actually are?
M. Holcumbrink
December 16, 2011 at 05:53 AM PDT
DrREC, If you don't like analogies or the design inference, all YOU have to do is actually step up and demonstrate that stochastic processes can account for what we say is designed. OR you can continue whining. Your choice...
Joe
December 16, 2011 at 04:06 AM PDT
OOPS: The discussion on the galvanometer and that on the infinite monkeys analysis are actually to be found here, no 11 in the ID foundations series; no 12 is on Paley-style self-replication as an additional capacity of a functional entity per the von Neumann kinematic self-replicator. [Do watch the vid!]
kairosfocus
December 16, 2011 at 01:47 AM PDT
Dr Rec: That's patently below the belt; you full well know that even before the point you clipped and strawmannised, the case was presented above, and that this case has long been laid out over and over again in detail elsewhere. (E.g., onlookers, cf here at UD recently and here on in context.) You also know full well that I am pointing out that posts in this thread are illustrative cases on point on the empirical reliability of the point I made in brief. So, your strawman is a willful misrepresentation. And, a red herring distraction from the evidence that points out that the expression is indeed empirically reliable, and pivots analytically on the issue of the infinite monkeys/needle in the haystack analysis, most recently explored again at UD in the post linked above. Onlookers will note that this sort of distraction is itself evidence of want of a sound response on your part. But, just in case you want something specific to this thread, I point you here. GEM of TKI
kairosfocus
December 16, 2011 at 01:37 AM PDT
Namely, we have every reason to see why complex, integrated functionality on many interacting parts naturally leads to islands of functional configurations in much wider spaces of possible but overwhelmingly non-functional configurations. (And, this thought exercise will rivet the point home, in a context that is closely tied to the statistical underpinnings of the second law of thermodynamics.) Clearly, it is those who imply or assume that we have instead a vast continent of function that can be traversed incrementally step by step starting from simple beginnings that credibly get us to a metabolising, self-replicating organism, who have to empirically show their claims. It will come as no surprise to the reasonably informed that the origin-of-cell-based-life bit is neatly snipped out of the root of the tree of life, precisely because after 150 years or so of speculations on Darwin's warm little pond full of chemicals and struck by lightning, etc., the field of study is in crisis. Similarly, the astute onlooker will know that the general pattern of the fossil record and of today's life forms is that of sudden appearance, stasis, sudden disappearances and gaps, not at all the smoothly graded overall tree of life as imagined. Evidence of small-scale adaptations within existing body plans has been grossly extrapolated and improperly headlined as proof of what is in fact the product of an imposed philosophical a priori, evolutionary materialism. That is why Philip Johnson's retort to Lewontin et al was so cuttingly, stingingly apt:
For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins' words) "give the appearance of having been designed for a purpose." . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. [--> those who are currently spinning toxic, atmosphere-poisoning, ad hominem laced talking points about "sermons" and "preaching" and "preachers" need to pay particular heed to this . . . ] The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
So, where does this leave the little equation accused of being question-begging:
Chi_500 = I*S - 500, bits beyond the solar system threshold
1 --> The Hartley-Shannon information metric is a standard measure of info-carrying capacity, here being extended to cover a case where we must meet some specifications, and pass a threshold of complexity.
2 --> The 500-bit threshold is sufficient to isolate the full Planck Time Quantum State [PTQS] search capacity of our solar system's 10^57 atoms, 10^102 states in 10^17 or so seconds, to ~ 1 in 10^48 of the set of possibilities for 500 bits: 3 * 10^150.
3 --> So, before we get any further, we know that we are looking at so tiny a fractional sample that (on well-established sampling theory) ANYTHING that is not typical of the vast bulk of the distribution is utterly unlikely to be detected by a blind process.
4 --> The comparison to make this familiar is, to draw at chance or at chance plus mechanical necessity, a blind sample of size one straw from a cubical hay-bale 3 1/2 light-days across, which could have our solar system out to Pluto in it [about 1/10 the way across]. With maximal probability -- all but certainty -- such a sample will pick up straw.
5 --> The threshold of complexity, in short, is reasonable, and if you want to challenge the solar system (our practical universe, which is 98% dominated by our Sun, in which no OOL is even possible . . . ) then scale up to the observed cosmos as a whole, 1,000 bits. (The calculation for THAT hay bale would have millions of cosmi comparable to our own lurking within, and we would have the same result.)
6 --> So, the only term really up for challenge is S, the dummy variable that is set to 0 as default; if we have positive, objective reason to infer functional specificity, or more broadly the ability to assign observed cases E to a narrow zone T that can be INDEPENDENTLY described (i.e. the selection of T is non-arbitrary; we have a definable collection in the set theory sense and a set-builder rule -- or at least a separate objective criterion for inclusion/exclusion), then it can be set to 1.
7 --> The S = 0 case, the default, is of course the blind chance plus necessity case. The assumption is that phenomena are normally accessible by chance plus necessity acting on matter and energy in space and time.
8 --> But, in light of the sort of issues discussed above (and over and over again elsewhere over the course of years . . . ), it is recognised that certain phenomena, especially FSCI and in particular dFSCI -- like the posts in our thread -- are in fact only reasonably accessible by intelligent direction on the gamut of our solar system or observed cosmos.
9 --> Without loss of general force, we may focus on functional specificity. We can objectively, observationally identify this, and routinely do so.
10 --> So, what the equation ends up doing is to give us an empirically testable threshold for when something is functionally specific, information-bearing and sufficiently complex that it may be inferred that it is best explained on design, not chance plus necessity.
11 --> Since this is specific and empirically testable, it cannot be a mere begging of the question; it is inviting refutation by the simple expedient of showing how chance and necessity, without intelligent guidance or starting within an island of function already -- that is what Genetic Algorithms do, as the infamous well-behaved fitness function so plainly shows -- can give rise to FSCI.
12 --> The truth is that the talking point storm and assertions about not sufficiently rigorous definitions, etc., are all because the expression handily passes empirical tests. The entire Internet is a case in point, if you want empirical tests.
13 --> So, if this were a world in which science were done by machines programmed to be objective, the debate would long since have been over as soon as this expression and the underlying analysis were put on the table.
14 --> But, humans are not machines, and so recently the debate talking point storm has been on how this eqn is begging questions or is not sufficiently defined to suit the tastes of those committed to a priori evolutionary materialism, or how GA's -- which start inside islands of function! -- show how FSCI can be had without paying for it with the hard cash of intelligence. (I won't bother with more than mentioning the sort of hostile, hateful attack that was so plainly triggered by our blowing the MG sock-puppet campaign out of the water. Cf link here for the blow by blow on how that campaign failed.)
15 --> To all this, I simply say: the expression invites empirical test and has billions of confirmatory instances. Kindly show us a clear case that -- without starting on an existing island of function -- shows how FSCI, especially dFSCI (at least 500 - 1,000 bits), emerges credibly by chance and necessity, within the scope of available empirical resources.
16 --> For those not familiar with the underlying principle, I am saying that the expression is analytically warranted per a reasonable model and is directly subject to empirical test with a remarkable known degree of success, and so far no good counter-examples. So, we are inductively warranted to trust it absent a convincing counter-example.
17 --> Not as question-begging a prioris, but as per the standard practice of science, where laws of science and scientific models are provisionally warranted and empirically reliable, not necessarily true beyond all possibility of dispute.
18 --> Indeed, that is why the laws of thermodynamics can be formulated in terms that perpetual motion machines of the first, second and third kind will not work. So far quite empirically reliable, and on reasonable models we can see why. But provide such a perpetual motion machine and thermodynamics would collapse.
____________
So, Dr Rec, a fill-in-the-blanks exercise:
your empirical counter-example per actual observation is CCCCCCC, and your analytical explanation for it is WWWWWWW
If you cannot directly fill in the blanks, we have every reason to accept the Chi_500 expression on the normal terms for accepting a scientific result, no matter how uncomfortable this is for the a priori materialists. GEM of TKI
kairosfocus
December 16, 2011 at 01:20 AM PDT
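For concreteness, here is a minimal sketch of the arithmetic behind the Chi_500 expression above, exactly as kairosfocus states it (the function wrapper and variable names are mine, for illustration; I is capacity in bits, S is the 0/1 specificity flag):

```python
# Chi_500 = I*S - 500: positive values fall beyond the 500-bit threshold.
def chi_500(bits: int, specified: bool) -> int:
    return bits * int(specified) - 500

# The threshold itself: 500 bits give 2**500 possible configurations,
# the "3 * 10^150" figure quoted in the comment.
print(f"{2**500:.3e}")           # ~3.273e+150

# Against the comment's estimate of 10^102 states searchable by the solar
# system's atoms, the searchable fraction is roughly 1 in 10^48.
print(10**102 / 2**500)          # ~3e-49

print(chi_500(bits=1000, specified=True))    # 500: past the threshold
print(chi_500(bits=1000, specified=False))   # -500: default chance/necessity case
```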
Dr Rec: Pardon, but do you understand the difference between an observation and an assumption? Let's take a microcontroller object program for an example. Can you see whether the controlled device with the embedded system works? Whether it works reliably, or whether it works partially? Whether it has bugs -- i.e. we can find circumstances under which it behaves unexpectedly in non-functional ways, or fails? Can you see that we can here recognise that something is functional, and may even be able to construct some sort of metric of the degree of functionality? Now, we observe that the microcontroller depends on certain stored strings of binary digits, and that when some are disturbed by injecting random changes it keeps on working, but beyond a certain threshold, key functions or even overall function break down. This identifies empirically that we are in an island of function. [As a live case in point, here at UD, last week I had the experience of discovering a "feature" of WP, i.e. if you happen to try square brackets -- like I am using here -- in a caption for a photo the post display process will fail to complete and posting of the original post, but not comments, will abort. I suspect that's because square brackets are used for certain functional tasks and I happened to half-trigger some such task, leading to an abort.] Do you now appreciate that we can empirically detect FSCI, and in particular, digitally coded FSCI? Do you in particular see that the concept of islands of function shaped by the constraints on -- in this case -- strings of algorithmically functional data elements, naturally leads to the islands of function effect? That, where we see functional constraints in a context of complex function, this is exactly what we should EXPECT? For, parts have to fit into a context of a Wicken-type "wiring diagram" for the whole to work, and absent the complex, mutually adapted set of elements wired on that diagram for that case, the system will wholly or partly degrade. That is, we see here the significance of functionally specific, integrated complex organisation. It is a commonplace of the technology of complex, multi-part, functionally integrated, organised systems, that function depends on fairly specific organisation, with a bit of room for tolerance, but not very much relative to the space of configurational possibilities of a set of components. And, we may extend this fairly simply to the case where there are no explicit strings, by taking the functional diagram apart on an exploded view, and reducing the information of that 3-D representation and putting it in a data structure based on ordered, linked strings. That is what Autocad etc do. And of course the assembly process is generally based on such an exploded view model. (Assembly of a complex functional system based on a great many parts with inevitable tolerances is in itself a complex issue, riddled with the implications of tolerances of the many components. Don't forget the cases in the 1950's where it was discovered that just putting a bolt in the wrong way on I think it was the F 86, could cause fatal crashes. Design for one-off success is much less complex than design for mass production. And, when we add in the issue in biology of SELF-assembly, that problem goes through the roof!) In short, we can see how FSCO, FSCI, and irreducible complexity emerge naturally as concepts summarising a world of knowledge about complex multi-part systems. 
These things are not made-up, they are instantly recognisable and understandable to anyone who has had to struggle with designing and building or simply troubleshooting and fixing complex multi-part functional systems. BTW, this is why I can only shake my head when I hear talking points over Hoyle's fallacy, when he posed the challenge of assembling a jumbo jet by passing a tornado through a junkyard. Actually -- and as I discussed recently here in the ID foundations series (notice the diagram of the instrument), we may take out the rhetorical flourish and focus on the challenge of assembling a D'Arsonval galvanometer movement based instrument in its cockpit. Or even the challenge of screwing together the right nut and bolt in a bowl of mixed parts, by random agitation. And, BTW, the just linked shows how Paley long since highlighted the problem with the dismissive "analogy" argument, when in Ch 2 of his work, he pointed out the challenge of building a self-replicating watch:
Suppose, in the next place, that the person who found the watch should after some time discover that, in addition to all the properties which he had hitherto observed in it, it possessed the unexpected property of producing in the course of its movement another watch like itself – the thing is conceivable; that it contained within it a mechanism, a system of parts — a mold, for instance, or a complex adjustment of lathes, baffles, and other tools — evidently and separately calculated for this purpose . . . . The first effect would be to increase his admiration of the contrivance, and his conviction of the consummate skill of the contriver. Whether he regarded the object of the contrivance, the distinct apparatus, the intricate, yet in many parts intelligible mechanism by which it was carried on, he would perceive in this new observation nothing but an additional reason for doing what he had already done — for referring the construction of the watch to design and to supreme art . . . . He would reflect, that though the watch before him were, in some sense, the maker of the watch, which, was fabricated in the course of its movements, yet it was in a very different sense from that in which a carpenter, for instance, is the maker of a chair — the author of its contrivance, the cause of the relation of its parts to their use. [Emphases added. (Note: It is easy to rhetorically dismiss this argument because of the context: a work of natural theology. But, since (i) valid science can be -- and has been -- done by theologians; since (ii) the greatest of all modern scientific books (Newton's Principia) contains the General Scholium which is an essay in just such natural theology; and since (iii) an argument's weight depends on its merits, we should not yield to such “label and dismiss” tactics. It is also worth noting Newton's remarks that “thus much concerning God; to discourse of whom from the appearances of things, does certainly belong to Natural Philosophy [i.e. what we now call “science”].” )]
In short, the additionality of self replication of a functioning system is already a challenge. And Paley was of course too early by over a century to know what von Neumann worked out on his kinematic self-replicator that uses digitally stored information in a string structure to control self assembly and self replication. (Also discussed in the just linked onlookers.) On the strength of these and related considerations, I then look at say Denton's description (please watch the vid tour then read) of the automated multi-part functionality of the living cell:
To grasp the reality of life as it has been revealed by molecular biology, we must magnify a cell a thousand million times until it is twenty kilometers in diameter [[so each atom in it would be “the size of a tennis ball”] and resembles a giant airship large enough to cover a great city like London or New York. What we would then see would be an object of unparalleled complexity and adaptive design. On the surface of the cell we would see millions of openings, like the port holes of a vast space ship, opening and closing to allow a continual stream of materials to flow in and out. If we were to enter one of these openings we would find ourselves in a world of supreme technology and bewildering complexity. We would see endless highly organized corridors and conduits branching in every direction away from the perimeter of the cell, some leading to the central memory bank in the nucleus and others to assembly plants and processing units. The nucleus itself would be a vast spherical chamber more than a kilometer in diameter, resembling a geodesic dome inside of which we would see, all neatly stacked together in ordered arrays, the miles of coiled chains of the DNA molecules. A huge range of products and raw materials would shuttle along all the manifold conduits in a highly ordered fashion to and from all the various assembly plants in the outer regions of the cell. We would wonder at the level of control implicit in the movement of so many objects down so many seemingly endless conduits, all in perfect unison. We would see all around us, in every direction we looked, all sorts of robot-like machines . . . . We would see that nearly every feature of our own advanced machines had its analogue in the cell: artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of components, error fail-safe and proof-reading devices used for quality control, assembly processes involving the principle of prefabrication and modular construction . . . . However, it would be a factory which would have one capacity not equaled in any of our own most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours . . . . Unlike our own pseudo-automated assembly plants, where external controls are being continually applied, the cell's manufacturing capability is entirely self-regulated . . . . [[Denton, Michael, Evolution: A Theory in Crisis, Adler, 1986, pp. 327 – 331. This work is a classic that is still well worth reading. Emphases added. (NB: The 2009 work by Stephen Meyer of Discovery Institute, Signature in the Cell, brings this classic argument up to date. The main thesis of the book is that: "The universe is comprised of matter, energy, and the information that gives order [[better: functional organisation] to matter and energy, thereby bringing life into being. In the cell, information is carried by DNA, which functions like a software program. The signature in the cell is that of the master programmer of life." Given the sharp response that has provoked, the onward e-book responses to attempted rebuttals, Signature of Controversy, would also be excellent, but sobering and sometimes saddening, reading.) ]
We could go on and on, but by now the point should be quite clear to all but the deeply indoctrinated. [ . . . ]
kairosfocus
December 16, 2011 at 01:19 AM PDT
Oh? They look like motors, they function like motors, they have the same types of parts that motors have, BUT.... they aren't motors because we know motors are designed. GOT IT! I see 4 definitions at dictionary.com that molecular motors fit.
John D
December 16, 2011 at 12:07 AM PDT
Yeah, I said "pre-established or independently known." "Or" has meaning. All your examples are pre-established; they conform to expectations of human design or codes, and distinguishing nature from design isn't in question. Seriously, do you think some explorer could wander up on Easter Island, and say those look natural? Or would the knowledge of statues in human design be sufficient?
DrREC
December 15, 2011 at 08:54 PM PDT
"And we recognized that pattern after we saw it," Because it conforms to a preset specification-the English language.DrREC
December 15, 2011 at 08:48 PM PDT
Molecular 'motors' are an analogy drawn to human design. Now you've taken the analogy too far.
DrREC
December 15, 2011 at 08:47 PM PDT
Motors were known to be designed BEFORE they were found in cells.
John D
December 15, 2011 at 06:37 PM PDT
Is there any doubt I designed my post?
Of course not. Because it contains a specification: a meaning beyond the mere description of the letters (electrons in this case) themselves. And we recognized that pattern after we saw it, not because we knew exactly what you would write.
Eric Anderson
December 15, 2011 at 04:17 PM PDT
DrREC:
I’m still waiting for someone to explain how they would detect design without a pre-established or independently known pattern.
Well, it has been pretty well laid out by Dembski and Meyer, but I have a hunch you may not accept their explanation, so I'm not sure that will convince you. "Pre-established" and "independent" are different issues, so perhaps part of the difficulty is that you may be confusing these concepts? Independence is an important factor, in that the specification has some meaning or function beyond the pure description of the physical system. That is why the prime numbers example is recognized as a specification: prime numbers have meaning beyond just the description of the string of digits themselves. Pre-established, however, is not a requirement. Since you haven't answered (or perhaps haven't seen) the question I posed, I will answer it. Yes, it is possible to recognize a specification and subsequently determine design even if we don't know the precise specification we should be looking for from the outset. We do it all the time in our regular everyday experience.

The idea that we have to identify and articulate the specification beforehand leads to outrageous and absurd conclusions. By that logic, we can never know if Stonehenge, the statues on Easter Island or any other never-before-seen thing is designed (we certainly didn't know the specification beforehand). Or consider the following example, based on that faulty logic: Two research colleagues are working to decipher a code that has not been deciphered before. The researchers work independently on separate strings of the same code for days, without success. One afternoon Researcher A bumps into Researcher B at the water cooler. Researcher B excitedly tells Researcher A that as he was looking at the symbols in a certain way he finally figured out the code, and tells Researcher A what to look for. Researcher A returns to his office, lays out the symbols and, sure enough, everything falls into place.

Now, based on DrREC's logic, we have the following absurd result: Researcher A can rightfully and validly claim that the code was designed, because he had a specification to look for when he walked back into his office after the water cooler conversation. However, Researcher B can never conclude that the code was designed, because he discovered the code without having a pre-specification in mind. Pre-specification is not a requirement. We infer design all the time without knowing beforehand what the precise specification will be.
Eric Anderson
December 15, 2011 at 04:10 PM PDT
You have no theory to back up your characterization of functional space as unbridgeable.
And I need one why? There are an infinite number of implausible ideas I have no theories to disprove. Should they all be taken seriously until I get around to disproving them?
When actual gaps have been tested, there are viable intermediate sequences.
I've bridged every gap I've tested with my six-foot plank. That's how I know I can walk on it to Hawaii.
ScottAndrews2
December 15, 2011 at 02:48 PM PDT
lastyear, Humans do not specify what products nucleic acid sequences result in; the physical protocols instantiated in the genetic translation machinery do that. We just came along later and observed it. But the fact that we observed it does not change the fact that the specification is built into the system. To suggest otherwise could not be a more anthropocentric statement.
Upright BiPed
December 15, 2011 at 02:33 PM PDT
Petrushka, You said, "you cannot distinguish a degraded functional sequence from a random sequence.” It is not possible to "translate" anything into gibberish, only into something with the appearance of gibberish. But no one is even talking about translating anything into gibberish. Here's my scenario again to compare against your statement:
If you give instructions to a man and he translates them in language you don’t understand to a second man who then follows most but gets a few things wrong, you cannot ascribe any functional information to those translated instructions because you don’t know which part was degraded.
In this case you cannot distinguish any of it from a random sequence, because you do not understand it. Unless, that is, you discern its specification by its functional effect, which is clearly possible even though it is likely a degraded functional sequence. Again, your statement is simple: "You cannot distinguish a degraded functional sequence from a random sequence." My illustration is also simple, and shows that your statement is clearly wrong. Without understanding the language at all, you can tell that it is not random because it conveys at least some of your specifications, even if the content is degraded. Perhaps what you mean to say is that you can determine the presence of functional content but not measure it. But that doesn't matter either. I could argue that your posts contain no functional content and that to prove otherwise you must rigorously calculate that content. I will inevitably find fault with your calculation and insist that if you cannot measure the bits of information, then it is not specified. I may find a typo and quibble over whether it renders the post non-specified because you didn't intend to type it. Wouldn't that be a really silly way to determine whether something contains complex, specified information?
ScottAndrews2
December 15, 2011 at 02:07 PM PDT