Uncommon Descent Serving The Intelligent Design Community

Who really understands what an island of function is or is not?


Earlier today, I decided to check back at TSZ, to see if they have recovered from the recent regrettable hack attack. They are back up, at least in part. The following, however, caught my eye:

Intelligent design proponents make a negative argument for design.  According to them, the complexity and diversity of life cannot be accounted for by unguided evolution (henceforth referred to simply as ‘evolution’) or any other mindless natural process.  If it can’t be accounted for by evolution, they say, then we must invoke design . . . .

What mysterious barrier do IDers think prevents microevolutionary change from accumulating until it becomes macroevolution?  It’s the deep blue sea, metaphorically speaking.  IDers contend that life occupies ‘islands of function’ separated by seas too broad to be bridged by evolution.

In this post (part 2a) I’ll explain the ‘islands of function’ metaphor and invite commenters to point out its strengths and weaknesses.  In part 2b I’ll explain why the ID interpretation of the metaphor is wrong, and why evolution is not stuck on ‘islands of function’.

This is quite wrong-headed, and easily explains part of why there is so little progress in exchanges:

1 –> The design inference is a positive inference on well-tested, inductively established signs, not a negative inference. For instance, the functionally specific complex organisation and associated information [FSCO/I] — notice the blend of complexity with specificity to achieve function — in the above clip is diagnostic of design as its most credible source. This is easily verified empirically on a base of literally billions of observed cases. (And there are no credible known exceptions, or they would have been trumpeted to the highest heavens all over the Web and in the literature.)

2 –> The similar inductive status of the island-of-function effect can also easily be shown from this text. There are a great many ways in which the 899 ASCII characters used in the above clip can be arranged: 128^899 ≈ 2.41 × 10^1894. (The number of Planck-time states of the 10^80 or so atoms of our observed cosmos since its credible beginning is less than 10^150 — a very large number, but one that is utterly dwarfed by the set of possibilities for 899 ASCII characters.) Very few of those arrangements would convey the above message in recognisable English, and while some noise — typos and the like — can be tolerated, all too soon the injection of random noise — a random walk away from the island of function — would destroy function.
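As a quick check on the arithmetic in the paragraph above, both headline numbers can be recomputed directly. The Planck-state figure uses the commonly cited rough constants (about 10^45 Planck ticks per second over about 10^17 seconds of cosmic history), which are assumptions of this illustration, not data from the clip:

```python
from math import log10

# Number of ways to arrange 899 ASCII characters (128 symbols per slot):
exponent = 899 * log10(128)
print(f"128^899 ~ 10^{exponent:.1f}")  # ~ 10^1894.4, i.e. about 2.41 * 10^1894

# Rough upper bound on Planck-time states of the observed cosmos:
# 10^80 atoms * ~10^45 Planck ticks/s * ~10^17 s ~ 10^142, safely below 10^150.
print(f"Configurations exceed the 10^150 bound by a factor of ~10^{exponent - 150:.0f}")
```

The exact big-integer comparison `128 ** 899 > 10 ** 1894` also holds, so the quoted figure is not a rounding artefact.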

3 –> This is a simple illustration of a commonplace fact of life for complex, functionally specific entities made up from multiple, well-matched components that must be properly arranged and coupled together to achieve function. Taking our solar system as a zone of interest, the relevant components can be scattered in a great many ways indeed, none of which will be functional. Even if clumped, a much smaller but still huge number of arrangements exists, the overwhelming majority of which will have no function.

4 –> Only in certain very special clusters of configurations (reflecting the amount of tolerance for configurations in a given neighbourhood) will there be functional configurations. So, we are at the issue that Dembski outlined long ago now, in No Free Lunch:

p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology.

I submit that what they have in mind is specified complexity [[cf. here below], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . .

Biological specification always refers to function . . . In virtue of their function [[a living organism’s subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .”

p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”
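The correspondence Dembski asserts between the 1-in-10^150 probability bound and the 500-bit complexity bound is a one-line conversion; a minimal check:

```python
from math import log2

# Express the universal probability bound of 1 in 10^150 in bits:
# -log2(10^-150) = 150 * log2(10)
bits = 150 * log2(10)
print(f"{bits:.1f} bits")  # ~ 498.3, rounded up to the 500-bit threshold
```

So the 500-bit figure is a slightly conservative round-up of the probability bound.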

5 –> This sort of functional specificity brings out how the functional cluster in view is informational: there is a specific pattern — a set of nodes and arcs — that has to be arranged in a form that allows function, within a fairly narrow range of tolerance. That range of neighbouring functional configurations defines an island of function. Also, without loss of generality, since a nodes-and-arcs pattern can be reduced to a structured string (this is how AutoCAD and similar tools store drawings), the island can be translated into string structures, with as many degrees of freedom as there are relevant bits.

6 –> Nor is this sort of remark exactly news: on Dec 30, 2011, I noted here at UD as follows (something that was actually adverted to in the TSZ thread, but was not taken seriously by objectors to design . . . ):

1 –> Complex, multi-part function depends on having several well-matched, correctly aligned and “wired together” parts that work together to carry out an overall task, i.e. we see apparently purposeful matching and organisation of multiple parts into a whole that carries out what seems to be a goal. The Junkers Jumo 004 Jet engine in the above image is a relevant case in point.

2 –> Ever since Wicken published the following remarks in 1979, this issue of wiring-diagram based complex functional organisation has been on the table as a characteristic feature of life forms that must be properly explained by any successful theory of the causal roots of life. Clip:

‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems.  Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]

3 –> The question at stake in the thread excerpted above is whether there can be an effective, incremental culling-out — based on competition for niches and thence reproductive success of sub-populations — that will create ever more complex systems that will then appear to have been designed.

4 –> Of course, we must notice that the implication of this claim is that we are dealing with in effect a vast continent of possible functional forms that can be spanned by a gradually branching tree. That’s a big claim, and it needs to be warranted on observational evidence, or it becomes little more than wishful thinking and grand extrapolation in service to an a priori evolutionary materialistic scheme of thought.

5 –> In cases where the function in question has an irreducible core of necessary parts, it is often suggested that something that may have had another purpose may simply find itself duplicated or fall out of use, then fit in with a new use. “Simple.”

6 –> NOT. For, such a proposal faces a cluster of challenges highlighted earlier in this UD series as posed by Angus Menuge [oops!] for the case of the flagellum:

For a working [bacterial] flagellum to be built by exaptation, the five following conditions would all have to be met:

C1: Availability. Among the parts available for recruitment to form the flagellum, there would need to be ones capable of performing the highly specialized tasks of paddle, rotor, and motor, even though all of these items serve some other function or no function.

C2: Synchronization. The availability of these parts would have to be synchronized so that at some point, either individually or in combination, they are all available at the same time.

C3: Localization. The selected parts must all be made available at the same ‘construction site,’ perhaps not simultaneously but certainly at the time they are needed.

C4: Coordination. The parts must be coordinated in just the right way: even if all of the parts of a flagellum are available at the right time, it is clear that the majority of ways of assembling them will be non-functional or irrelevant.

C5: Interface compatibility. The parts must be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’: even if a paddle, rotor, and motor are put together in the right order, they also need to interface correctly.

( Agents Under Fire: Materialism and the Rationality of Science, pgs. 104-105 (Rowman & Littlefield, 2004). HT: ENV.)

8 –> The number of biologically relevant cases where C1 – C5 have all been observed to be met: ZERO.

9 –> What is coming out ever more clearly is this:

when a set of matching components must be arranged so they can work together to carry out a task or function, this strongly constrains both the choice of individual parts and how they must be arranged to fit together

A jigsaw puzzle is a good case in point.

So is a car engine — as anyone who has had to hunt down a specific, hard to find part will know.

So are the statements in a computer program — famously, a NASA probe (Mariner 1, 1962) veered off course on launch and had to be destroyed by range safety because of what is commonly reported as a single missing character — a hyphen or overbar — in its guidance equations.

The letters and words in this paragraph are like that too.

That’s why (at first, simple level) we can usually quite easily tell the difference between:

A: An orderly, periodic, meaninglessly repetitive sequence: FFFFFFFFFF . . .

B: Aperiodic, evidently random, equally meaningless text: y8ivgdfdihgdftrs . . .

C: Aperiodic, but recognisably meaningfully organised sequences of characters: such as this sequence of letters . . .
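One rough, illustrative proxy for this three-way distinction (not a measure the post itself uses) is compressed size: periodic order compresses to almost nothing, meaningful English compresses part-way, and random characters barely compress at all. A sketch, assuming zlib as a stand-in general-purpose compressor; the sample texts are made up for the illustration:

```python
import random
import zlib

# Case A: an orderly, periodic, repetitive sequence.
ordered = b"F" * 1000

# Case C: aperiodic but meaningfully organised English text.
english = (b"To be meaningful or functional, a correct set of core components "
           b"has to match and must be properly arranged. While there may be "
           b"some room to vary, it is not true that just any part popped in "
           b"any number of ways can fit in. A jigsaw puzzle is a good case in "
           b"point, and so is a car engine, as anyone who has had to hunt "
           b"down a specific, hard to find part will know. The letters and "
           b"words in this paragraph are constrained in exactly the same "
           b"way, which is why random noise soon destroys their meaning.")

# Case B: aperiodic, random printable characters, same length as the English.
rng = random.Random(0)
noise = bytes(rng.randrange(33, 127) for _ in range(len(english)))

for label, data in (("A ordered", ordered), ("C english", english), ("B random", noise)):
    print(label, len(data), "->", len(zlib.compress(data)))
# Expected ordering of compressed sizes: A far smaller than C, C smaller than B.
```

Compressibility is only a crude stand-in for "functional specificity", but it does cleanly separate the three cases the post lists.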

In short, to be meaningful or functional, a correct set of core components has to match and must be properly arranged; and while there may be some room to vary, it is not true that just any part, popped in in any number of ways, can fit in.

As a direct result, in our general experience and observation, if the functional result is complex enough, the most likely cause is intelligent choice, or design.

This has a consequence. For this need for choosing, correctly arranging, and then hooking up correct, matching parts in a specific pattern implicitly rules out the vast majority of possibilities, and leads to the concept of islands of function in a vast sea of possible but meaningless and/or non-functional configurations.

10 –> Consequently, the normal expectation is that complex, multi-part functionality will come in isolated islands. So also, those who wish to assert an “exception” for biological functions like the avian flow-through lung will need to warrant their claims empirically. Show us, in short.

11 –> And, to do so will require addressing the difficulty posed by Gould in his last book, in 2002:

. . . long term stasis following geologically abrupt origin of most fossil morphospecies, has always been recognized by professional paleontologists. [The Structure of Evolutionary Theory (2002), p. 752.]

. . . .  The great majority of species do not show any appreciable evolutionary change at all. These species appear in the section [[first occurrence] without obvious ancestors in the underlying beds, are stable once established and disappear higher up without leaving any descendants.” [p. 753.]

. . . . proclamations for the supposed ‘truth’ of gradualism – asserted against every working paleontologist’s knowledge of its rarity – emerged largely from such a restriction of attention to exceedingly rare cases under the false belief that they alone provided a record of evolution at all! The falsification of most ‘textbook classics’ upon restudy only accentuates the fallacy of the ‘case study’ method and its root in prior expectation rather than objective reading of the fossil record. [[p. 773.]

12 –> In that context, the point raised by GP above, that

. . .  once a gene is duplicated and inactivated, it becomes non visible to NS. So, intelligent causes can very well act on it without any problem, while pure randomness, mutations and drift, will be free to operate in neutral form, but will still have the whole wall of probabilistic barriers against them.

. . . takes on multiplied force.

___________

In short, the islands of function issue — rhetorical brush-asides notwithstanding — is real, and it counts.  Let us see how the evolutionary materialism advocates will answer to it.

7 –> So, what is the grand overturn that shows that this is all nonsense? The concept of rising fitness functions that allow incremental change:

Now suppose that it rains for 40 days and 40 nights. The rain fills up our landscape, forming a vast sea.  Only the mountain tops remain above the water as islands – the ‘islands of function’ that IDers are so fond of.

Our populations occupy the islands.  Sea level indicates the minimum fitness at which mutants remain viable. Small changes will create viable descendants at different spots on the island, though the population as a whole will gravitate toward the high spots. Larger changes will put the mutants underwater, where they will die out.

The idea, according to ID proponents, is that populations remain stranded on these islands of function.  Some amount of microevolutionary change is possible, but only if it leaves you high and dry on the same island.  Macroevolution is not possible, because that would require leaping from island to island, and evolution is incapable of such grand leaps.  You’ll end up in the water.

There is some truth to the ‘islands of function’ metaphor, but it also has some glaring shortcomings that ID proponents almost always overlook.  I will mention some of the strengths and  shortcomings in the comments, and I know that my fellow commenters will point out others.

8 –> To which the obvious answer is that the requisites of complex, specific, integrated function define islands which are isolated by seas of non-function that need to be bridged, not just on paper but observationally AND WITHIN ACCESSIBLE SEARCH RESOURCES (where the atomic resources of our solar system make the BLIND SEARCH creation of 500 bits of novel FSCO/I maximally implausible, and those of the observed cosmos max out at 1,000 bits).

9 –> In particular, the warrant for bridging islands of function requires that such a claim be justified observationally — starting with the origin of the very first body plan, and continuing with the origin of further body plans, credibly requiring 100 – 1,000 k bits of genetic information in the first case (in a string data structure) and of order 10 – 100 million bits in onward cases for multicellular body plans.

10 –> Hardly less fatal is something implied in what I just outlined. We are not dealing with known, close-by islands that were mountain-tops flooded out, but an unknown and patently vast seascape that may or may not contain islands of function, so far as blind chance and mechanical necessity — the only means of search permissible under the relevant conditions — are concerned. And there are strictly limited search resources, which max out at 500 – 1,000 bits under very generous conditions.

11 –> So, what is really needed is to start with a warm little pond or the like, get a suitable concentration of monomers, and then — per reasonable observation — arrive at a living cell: one that has encapsulation with gating of materials flows, and that carries out self-replication and metabolism using a genetic code mechanism and the like. Then, we need to see observational warrant for going from that to novel body plans with specialised, properly organised cell types, tissues, organs, systems, etc., constituting a new organism. This simply has not been done, nor is it in prospect.

12 –> Absent that, what we have is gross extrapolation of micro changes in already existing body plans, substituted for what was really needed. That is, we need to explain crossing the sea of non-function by blind chance mechanisms that would put us on shorelines of function, not the incremental hill climbing that can happen by all accounts once we are on such a shoreline. And, this must start, logically, with the very first body plan.

13 –> At the same time, we are surrounded by a world of technology that tells us that intelligent designers exist and are fully capable of creating FSCO/I rich systems. And we have whole disciplines and professions that study and practice the design of FSCO/I rich systems.

_________

So, which alternative explanation is more reasonable on the actual evidence, why? END

Comments
Hacked? How can they possibly know they were hacked unless they know the identity of the hacker? How unscientific, to assign "design" to what should be explained as a coincidental accumulation of random errors emerging as what only appears to be deliberate hacking. William J Murray
Funny question in the OP. TSZ got hacked. They are attempting to crawl out of an ocean of non-function back on to an island of function. Yet they deny such islands exist. Morons. Mung
F/N: I need -- power drop outs permitting -- to take up a couple of further points that help us understand what is happening rhetorically:
_______________

AF, 5 (re FSCO/I): Adequately described? Adequately? On whose judgement? And where can this adequate description be found? Quantified? Are you seriously claiming you can quantify FSCO/I with regard to a biological example? Surely the way to confirm this assertion is to demonstrate that you can indeed do what you claim.

KF, F/N to 5: cf 15 ff below. Your questions above are answered — have long been answered — here and at IOSE, which as can be easily seen, I wrote. You should be well aware of this 101, given your 8 years of observing the ID debates.

KF, 15 (there is much more there): for an object of complexity such that the nodes and arcs Wicken wiring diagram patterns [cf his 1979 remark on that topic, which is the actual root of the descriptive terms and initials FSCI and FSCO/I . . . . ] to create a flyable jet involve at least 500 – 1,000 structured yes/no questions [= 500 - 1,000 bits], the atomic resources of our solar system or of the observed cosmos as a whole will be blatantly inadequate for us to expect that such a structure would spontaneously emerge through blind forces. But, FSCO/I rich entities are routinely created by planned, deliberate work. In short, counterflow leading to FSCO/I is a strong sign of deliberate work according to an organising plan.

AF, 26: Regarding your personal ID argument involving FSCO/I, I am not interested in pursuing it because it is currently solely your personal ID argument. None of the prominent ID proponents that I know of, Behe, Dembski, Meyer, Nelson etc. appear to have noticed FSCO/I, none of them have commented on it that I have noticed and certainly no-one has endorsed it. The DI or Evolution News have not picked up on it either. Your argument is currently deniable as ID.
I reiterate that nothing you write about FSCO/I has any connection to reality and specifically to biology. Repeated demands for some genuine computation of a real biological example meet with empty bluster. Colour me unimpressed.

WJM, 28: Alan Fox’s response at @26 shows that he isn’t interested in understanding ID arguments, and apparently he admits he’s incapable of understanding them. It doesn’t matter to him if KF or UB have presented incontrovertible arguments/evidence for ID and have shown it incontrovertibly necessary for life – because he’s not interested in it. He doesn’t understand it and isn’t interested in understanding it. He dismisses them with a wave of his “it’s just your personal view” hand. Apparently, all Alan Fox cares about is to sit on the sideline and cheer as experts on his “side” offer rebuttals he doesn’t understand to arguments/evidence he doesn’t comprehend, and aim negative comments at those who disagree with his experts.

KF, 29 (much more there, starting with the significance of config spaces and AF's previously shown want of capacity to understand such, then developing the link to search by blind chance plus mechanical necessity vs design): Anything that can be symbolised as a nodes and arcs “wiring diagram” pattern can be reduced to a structured set of strings. Indeed that is how virtual realities and computer aided engineering drawings etc are done. So, once we have a case where we are looking at complex objects that are functionally specific, similar to text in English or computer programs, etc, the same challenge will hold. We simply do not have the atomic resources and the time available to search such out enough to have confidence that we are likely to hit on functional zones. And, it is not a reasonable thing to assert that almost any and every configuration will do in the relevant sense.
We see that easily enough with English text and computer programs or circuits, but the same holds for protein fold domains, which on empirical evidence are deeply isolated in AA sequence space — was it 1 in 10^65 or so? It gets worse when we consider that we need hundreds of types of specific molecules, correctly organised to function as a metabolising, self-replicating automaton. Thus, OOL is the first and decisive hurdle for a blind chance and necessity model of the origin of the world of life. So strong is the pattern that those who claim or imply otherwise need to provide strong empirical demonstrations. Which, for almost 150 years, have simply not been forthcoming, never mind the screaming headlines we have seen from time to time and the confident manner of declarations. There is every good reason to see that life based on cells is designed, per FSCO/I as a highly reliable sign of design. And once that hurdle is passed, there is no good reason to try to lock out design in explaining body plans onwards.

Optimus, 35 (replying to AF at 26): Signature in the Cell (by Meyer) argues quite deliberately from the presence of complex, functionally-specified information to ID. FSCO/I is not a construction of KF’s imagination. You’re badly mistaken on this one, Alan.

KF, 40 (responding to Optimus -- the bulk of the comment is reproduced as it is pivotal): Durston, et al speak of functional sequence complexity and have developed a similar metric pivoting on Shannon’s H, the average info per symbol (in a context where symbols in an alphabet typically communicate different quantities of info), though they do not apply an explicit threshold value as I do. In my presentations, I have taken up their metrics and have shown how they fit with the thresholds I have used. There are others who say much the same, up to Dembski in NFL, who speaks of how in biology specification is cashed out as function.
Then there is what Wicken had to say way back in 1979, which is where I hit on the summary term from:
“organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’”
. . . and also what Orgel said in ’73, bearing in mind that in biological contexts, function is how specification is expressed:
living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity.”
Just take these two key cites and the term, functionally specific complex organisation and/or associated information — FSCO/I — practically drops out as a short-hand description. The problem is, it seems that in too many cases we are dealing with ideologues who do not care a whit about truth, reasonableness or fairness, so long as they can spew forth a superficially plausible dismissive assertion.

Eugen, 43 (to AF): May I recommend you to read the ID Foundation series here on UD starting with https://uncommondescent.com/intelligent-design/id-foundations-the-design-inference-warrant-and-the-scientific-method/ [--> Notice, this is not a reference to the weak argument correctives, but to a series of articles on pivotal concepts taken up as my first major set of posts for the UD blog as a regular contributor; AF did not even take time to see the difference from the WAC's, as can be seen below.] It’s time to go …. https://www.youtube.com/watch?v=JNezZV4648I

AF, 46: I am well awre of the FAQ (and it’s incoherency). Am I to take that as official ID movement endorsement of KF’ one long diatribe of bad analogies? Let’s see it stated explicitly from a big honcho of ID (Behe, Dembski, Meyer, mung) that they agree with, accept and support FSCO/I as a scientific proposition.
____________

What has happened here? Obviously, AF has not even bothered to notice that his demand for endorsement by a major ID proponent has been directly answered, i.e. Meyer's Signature in the Cell pivots on the FSCO/I concept, whether or not he uses the term. In addition, it is clear that the term is a simple description for a pattern noticed long before the ID movement emerged, in the 1970's, by Orgel and Wicken. Going further, observe how AF slides from a reference to the UD ID foundations series to a dismissive remark on the weak argument correctives (which he grossly mischaracterises in dismissing them).
He has also made a demand for biological instantiation, ignoring that it is a commonplace since at least the 1970's that DNA and proteins are informational macromolecules that store well past 500 - 1,000 bits of information, explicitly or implicitly. (That is part of the context of Orgel and Wicken above.) In addition, taking the Chi_500 metric, AF has been repeatedly pointed to the Durston et al result that shows quantitative functional sequence complexity values for 15 protein families. (and their functional sequence complexity is again very closely related conceptually to the term I have used descriptively then have generated metrics for, FSCO/I. Shannon's H is a metric of average info per symbol, as I invited AF to examine previously.) Indeed, in outlining how the Chi_500 metric is generated by doing a log reduction of Dembski's 2005 metric, I noted:
since 10^120 ~ 2^398, we may "boil down" the Dembski metric using some algebra — i.e. substituting and simplifying the three terms in order — as log(p*q*r) = log(p) + log(q) + log(r) and log(1/p) = –log(p):

Chi = –log2(2^398 * D2 * p), in bits, and where also D2 = Phi_S(T)

Chi = Ip – (398 + K2), where now: log2(D2) = K2

That is, Chi is a metric of bits from a zone of interest, beyond a threshold of "sufficient complexity to not plausibly be the result of chance," (398 + K2). So,

(a) since (398 + K2) tends to at most 500 bits on the gamut of our solar system [[our practical universe, for chemical interactions! ( . . . if you want, 1,000 bits would be a limit for the observable cosmos)] and

(b) as we can define and introduce a dummy variable for specificity, S, where

(c) S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T:

Chi = Ip*S – 500, in bits beyond a "complex enough" threshold . . . .

xxii: So, we have some reason to suggest that if something, E, is based on specific information describable in a way that does not just quote E and requires at least 500 specific bits to store the specific information, then the most reasonable explanation for the cause of E is that it was designed.
The metric may be directly applied to biological cases, using Durston’s Fits values — functionally specific bits — from his Table 1 to quantify I, and accepting functionality on specific sequences as showing specificity, giving S = 1, so we may apply the simplified Chi_500 metric of bits beyond the threshold:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond
SecY: 342 AA, 688 fits, Chi: 188 bits beyond
Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond

xxiii: And, this raises the controversial question that biological examples such as DNA — which in a living cell is much more complex than 500 bits — may be designed to carry out particular functions in the cell and the wider organism.
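The arithmetic behind the quoted table is just subtraction of the 500-bit threshold, and is easy to reproduce. The fits values are those quoted from Durston et al.'s Table 1 above; the variable names are my own:

```python
# Simplified Chi_500 metric as stated in the comment above: Chi = Ip * S - 500,
# with Ip taken as Durston et al.'s "fits" (functionally specific bits).
THRESHOLD = 500

durston_fits = {"RecA": 832, "SecY": 688, "Corona S2": 1285}

for protein, fits in durston_fits.items():
    S = 1  # the sequence is taken as functionally specific, per the comment
    chi = fits * S - THRESHOLD
    print(f"{protein}: Chi = {chi} bits beyond the threshold")
# Reproduces the quoted figures: RecA 332, SecY 188, Corona S2 785.
```

Note that the metric itself simply shifts and thresholds the fits values; all the substantive work is in how the fits were estimated in the first place.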
In short, in easily accessible materials, AF's demands were met long before he made them. His attention was drawn to the fact. He ignored it and reiterated his talking points in willful disregard of duties of care to truth, fairness and even well merited correction. All that mattered to him was to be able to get back to his dismissive, distractive talking points. And, we can bet that, if he can get away with it again, he will repeat the performance.

Unfortunately, this sort of behaviour has been typical of the objectors we have had to deal with for years, and we see it here where they are on notice that they have to be civil. Elsewhere, it heads sharply downhill from there.

So, we see what is really going on. And, we have a strong suspicion why, given that it is rapidly approaching six months since an open invitation was issued to any serious objector to make their case here at UD, with no serious takers to date. Obviously, the objectors cannot make their case on the straight merits. (VJT's current critical review of the limitations of a case where there was a serious try shows a lot of why. Notice, this turns out to be a case of modelled intelligent design being used to try to make it out that Darwinian mechanisms that specifically exclude design can do the job.)

Which speaks volumes about the claimed practically certain, all but indubitably factual nature of claimed universal common descent via chance variations and differential reproductive success in ecological niches leading to descent with unlimited modification. AF and ilk need to do much better than this. KF kairosfocus
Mung:
Strange. My own experience over at TSZ was just the opposite of what you claim KF could expect if he went there.
I second that and can prove it. Joe
Alan Fox:
I am well awre of the FAQ (and it’s incoherency).
YOU are the incoherency here, Alan. Your false accusation means nothing to us. Joe
AF: Why are you descending into patently false dismissive assertions, instead of dealing with matters on the merits? Why have you shown no sign of engaging the obvious gaps in your knowledge base -- as has emerged in recent days, starting with, you did not seem to be able to understand a phase space and a configuration space -- that so obviously render you incapable of making the dismissive judgements you have just asserted? Does this not suggest that the real problem is not the merits of the case but a priori ideological commitments on your part to materialism and/or its fellow traveller views? Please, think again, then speak more fairly and substantially next time. KF PS: My experience of the attitude and behaviour of some of those harboured at various objector sites, and the lack of policing of abusive commentary leads me to the conclusion that -- apart from to make a simple statement for record -- there is no point in trying to argue a matter on the merits with those who have no intention of being reasonable, have every intention of being abusive [including threatening my family], and/or with those who enable or harbour such. kairosfocus
Eugen I am well aware of the FAQ (and its incoherency). Am I to take that as official ID movement endorsement of KF's one long diatribe of bad analogies? Let's see it stated explicitly from a big honcho of ID (Behe, Dembski, Meyer, mung) that they agree with, accept and support FSCO/I as a scientific proposition. Alan Fox
Strange. My own experience over at TSZ was just the opposite of what you claim KF could expect if he went there.
L I A R ;) Alan Fox
timothya:
You and your ID colleagues have not explained it.
How would you know?
An essential part of a scientific explanation of a system is to identify the mechanism by which a cause is connected to an effect.
Design is a mechanism. Agency involvement is another mechanism.
That involves answering the what, when, where, and how questions.
The what is whatever we are investigating. The "when, where and how" can only be answered by studying the design and all relevant evidence. IOW timmy, you don't know Jack about science and investigation. Joe
Alan Fox May I recommend you to read ID Foundation series here on UD starting with https://uncommondescent.com/intelligent-design/id-foundations-the-design-inference-warrant-and-the-scientific-method/ It's time to go .... https://www.youtube.com/watch?v=JNezZV4648I Eugen
TA: Still unable to acknowledge that design by a skilled, knowledgeable intelligent designer is a mechanism demonstrably -- with billions of cases in point -- capable of creating FSCO/I, while blind chance and/or mechanical necessity have no such observed track record, multiplied by a major search resources challenge in a solar system of 10^57 atoms and ~ 4.5 BY or an observed cosmos of 13.7 BY and ~ 10^80 atoms? Telling. KF kairosfocus
Kairosfocus. Your treatise is very clear and extremely compelling. Thank you very much. Box
Optimus, Thanks. Durston et al. speak of functional sequence complexity and have developed a similar metric pivoting on Shannon's H, the average info per symbol (in a context where symbols in an alphabet typically communicate different quantities of info), though they do not apply an explicit threshold value as I do. In my presentations, I have taken up their metrics and have shown how they fit with the thresholds I have used. There are others who say much the same, up to Dembski in NFL, who speaks of how in biology specification is cashed out as function. Then there is what Wicken had to say way back in 1979, which is where the summary term comes from:
"organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’"
. . . and also what Orgel said in '73, bearing in mind that in biological contexts, function is how specification is expressed:
" living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity."
Just take these two key cites, and the term functionally specific complex organisation and/or associated information -- FSCO/I -- practically drops out as a short-hand description. The problem is, it seems that in too many cases we are dealing with ideologues who do not care a whit about truth, reasonableness or fairness, so long as they can spew forth a superficially plausible dismissive assertion. That is sad, but it seems to be what we face. Those who run sites such as TSZ need to do some very sober reflection on what they are enabling. KF kairosfocus
Joe posted this:
Not so Mike. We have explained exactly where mechanical tendencies stop. Behe discusses it at length. Dembski, Meyer and UD's members have also.
You and your ID colleagues have not explained it. You simply assert that there is such a boundary. Yet ID continues to offer estimates of when that vary from "now" to "6000 years ago" to "4.7 billion years ago" (ask Bill Dembski about the when question), where varies from "wherever life first blossomed" to "the Garden of Eden" to "Mount Ararat" to "everywhere at this moment", and how varies from "every aspect of life is engineered" to "this obscure aspect of one version of life is the Darwinian Giantkiller". An essential part of a scientific explanation of a system is to identify the mechanism by which a cause is connected to an effect. That involves answering the what, when, where, and how questions. The questions that ID refuses to address. timothya
Alan Fox:
Regarding your personal ID argument involving FSCO/I, I am not interested in pursuing it because it is currently solely your personal ID argument.
L I A R Mung
AF:
I was merely pointing out that the TSZ thread author (KeithS AKA thaumaturge) that you quote in your OP is no longer able to respond to you here and that you are welcome at “The Skeptical Zone” if you want to continue debating with him.
Strange. My own experience over at TSZ was just the opposite of what you claim KF could expect if he went there. Mung
Why can I jump 3 feet in the air but not 300?
Jumping 3 feet in the air improves your chances of survival and reproduction, jumping 300 feet in the air does not (there's no chicks up there, in spite of what you may have been told). Ain't evolutionary explanations grand!? Mung
AF @ 26
Regarding your personal ID argument involving FSCO/I, I am not interested in pursuing it because it is currently solely your personal ID argument. None of the prominent ID proponents that I know of, Behe, Dembski, Meyer, Nelson etc. appear to have noticed FSCO/I, none of them have commented on it that I have noticed and certainly no-one has endorsed it. The DI or Evolution News have not picked up on it either. Your argument is currently deniable as ID. I reiterate that nothing you write about FSCO/I has any connection to reality and specifically to biology. Repeated demands for some genuine computation of a real biological example meet with empty bluster. Colour me unimpressed. But I’m just a layman, maybe I just can’t spot the subtlety of your argument. Take up the cudgels with professional scientists. Try the Skeptical Zone for a start.
Signature in the Cell (by Meyer) argues quite deliberately from the presence of complex, functionally-specified information to ID. FSCO/I is not a construction of KF's imagination. You're badly mistaken on this one, Alan. Optimus
OK perhaps blind watchmaker evolution isn't a blind search because it isn't a search at all. Whatever happens, happens and whatever survives to reproduce survives to reproduce. If something is found along the way then it's "oh joy" and then carry on. Joe
and elzinga chimes in:
When one asks an ID/creationist where along the chain of complexity of atomic and molecular assemblies the “natural” and “mechanical” tendencies stop and “programs” and “information” take over, one never gets an answer.
Not so Mike. We have explained exactly where mechanical tendencies stop. Behe discusses it at length. Dembski, Meyer and UD's members have also. It's just that nothing we say is ever good enough for you. You think that stuff just emerges -- how the heck can we test that, Mike? AGAIN -- materialists have all the power, as the rules of scientific investigation mandate that materialistic processes be eliminated before a design inference can be made. The point is, if you don't like our design inference, don't blame us. Our inference follows the rules of scientific investigation. And you have all the power to step up and say "Hey, we can demonstrate materialistic processes producing what you say requires a designer" -- then we will actually have something to respond to. Joe
Great, olegt thinks that blind watchmaker evolution isn't a blind search. He also told me that chromosomes are one long polymer called the DNA: Oleg sez:
Chromosomes. are. all. connected. It is one long polymer. Called the DNA.
Natural selection is blind, without purpose and mindless. It's just differential reproduction due to heritable chance variation. Having more offspring does not = producing the diversity of life. And natural selection reduces diversity anyway. It is obvious that olegt doesn't understand biology nor evolutionism. Joe
Joe: Has OlegT bothered to read Orgel, back in 1973, in the text that INTRODUCED the term specified complexity and set up the modern discussion? Similarly, this sort of issue was addressed long ago in the foundation technical work for design theory, TMLO. For instance in Ch 8 there is a rather specific discussion of order vs complexity that is specified vs randomness using text strings and polymers or crystals. Let me cite Orgel:
. . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [The Origins of Life (John Wiley, 1973), p. 189.]
That is, crystals, insofar as they are specified, show mechanical necessity reflected in order, not an information-rich pattern of aperiodic arrangements that lead to a function relying on the specific pattern, as happens with, say, a protein. Insofar as we have crystals like star-shaped snowflakes, the complexity is separate from the specification, i.e. it is based on chance. What we do not have is function depending on the joint operation of specificity and complexity. Break a snowflake in two and you have two snowflakes, maybe less symmetrical. Break a program in two haphazardly, and it most likely will fail, as will similarly be the likely result of breaking a sentence. As for breaking a protein AA string . . .

Of course, in principle we could manipulate the micro-environment that forms a star-shaped snowflake and code information into it, say in the prong heights on the arms, similar to a Yale-lock type key. But that is not what is usually happening. That is, the evidence is that we have ill-informed or manipulative talking points used to indoctrinate those looking for just that: talking points to block having to address a significant issue that they apparently do not want to face on its actual merits. And all that stuff about "oh, you spoke at length" is, in that context, irrelevant and distractive, leading out to the ad hominem. I answered someone who, after 8 years of making criticisms, did not know "A from bull foot" (that's a Jamaican idiom) about basics, and gave him a personal tutorial. After this, I do not expect AF to make any assertion that he does not know what a configuration space is about, and why it is that FSCO/I is an index of design as credible cause. OlegT either has not bothered to find out the basics about how the design inference filter works on a per-aspect basis, or is seeking to manipulate those who look to him for intellectual leadership.
Whichever is the case, he has failed to do basic duties of care to get the facts straight and present them straight before speaking out critically. Sadly telling. Again. KF kairosfocus
Strange, olegt seems to think that crystal growth is an issue for kairosfocus. Joe
AF: It is quite clear that this is not a "personal" argument, but also that you are not in a position to understand what a configuration space -- which is linked to concepts common in mathematics and physics since Gibbs et al in C19 -- is.

The argument is actually fairly easy to access, once you look at the simple case of an explicit string: * - * - * . . . - *, of length n. WLOG, we can use ASCII strings, with 128 states per position [or binary ones, with 2 states per position], as anything of consequence can be coded in this way, as engineering drawing packages demonstrate. We then have a set of possible values, for binary strings, from 000 . . . 0 to 111 . . . 1 inclusive. The config space for such a string is simply the list of possible values, counted up. In short, to go concrete: imagine a very wide memory space, each row holding 899 ASCII characters, running from all zeros counted up to all ones. In a simple 3-bit case:

000, 001, 010, 011, 100, 101, 110, 111

Such a space would be easily searchable by chance and necessity, but one that is 899 ASCII characters wide is of a different order indeed. For the text example used, with 899 ASCII characters, that is equivalent to a bit string 7 * 899 = 6,293 bits wide. The number of possible configurations is 2^6,293, or 2.41*10^1894 -- the number of rows required to exhaust the possibilities.

Immediately, Houston, we have a problem. There are only 10^57 atoms in our solar system, and ~ 10^80 in our observed cosmos. Taking the latter, and allowing states to change every 10^-45 s -- a generous estimate of the Planck limit for the fastest processes that have physical meaning -- we are looking at 10^150 states in 10^25 s, an estimate of the thermodynamic lifespan of the observed cosmos. (About 50 million times the time typically said to have elapsed since the big bang.)
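The arithmetic in the comment above is easy to check directly. A minimal Python sketch (the variable names are mine, purely illustrative):

```python
from math import log10

# 899 ASCII characters at 7 bits per character
bits = 7 * 899
assert bits == 6293

# Express the configuration-space size 2^6293 as m * 10^e
exponent = bits * log10(2)             # decimal exponent of 2^6293
e = int(exponent)
m = 10 ** (exponent - e)
print(f"2^{bits} ~ {m:.2f} * 10^{e}")  # 2^6293 ~ 2.41 * 10^1894

# State-count bound: 10^80 atoms * 10^45 state changes/s * 10^25 s
print(f"max states ~ 10^{80 + 45 + 25}")  # max states ~ 10^150
```

On these figures the cosmos-scale state count (10^150) falls short of the space (~10^1894) by over 1,700 orders of magnitude, which is the point the comment is making.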
Our observed cosmos cannot scratch the surface of the possibilities, much less exhaust them, or even sample so high a fraction that it is reasonable to expect unusual clusters of configs to get captured in the sample. So, we have a serious observational limit for blind sampling. If there are special narrow zones in the space of possibilities (such as texts in English), then on random sampling or mechanical processes we have no good reason to expect to catch them in our little solar-system-sized or observed-cosmos-sized cast net. This is of course the precise reason for the islands of function concept: it highlights that there are such zones, which can be specified other than by listing the configs or the like. Overwhelmingly, what sampling theory tells us to expect on blind samples -- even fairly biased ones -- is the bulk of the possibilities, i.e. gibberish. Notice, I have NOT made a probability calculation [that heads off many a side track in the debates over the years . . . ]; I have only adverted to the issue of the challenge of blind sampling.

How then do we so commonly see long texts in English? Not by blind chance or mechanical necessity or both, but by intelligent design. We know English and we compose messages in English intelligently. This is a simple, direct illustration of the long since abundantly demonstrated difference in capability of blind chance and mechanical necessity vs design. Also, observe, this is without loss of generality [WLOG]. Anything that can be symbolised as a nodes-and-arcs "wiring diagram" pattern can be reduced to a structured set of strings. Indeed, that is how virtual realities and computer-aided engineering drawings etc. are done. So, once we have a case where we are looking at objects that are both complex and functionally specific, similar to text in English or computer programs, etc., the same challenge will hold.
We simply do not have the atomic resources and time available to search such spaces enough to have confidence that we are likely to hit on functional zones. And it is not reasonable to assert that almost any configuration will do in the relevant sense. We see that easily enough with English text and computer programs or circuits, but the same holds for protein fold domains, which on empirical evidence are deeply isolated in AA sequence space -- was it 1 in 10^65 or so? It gets worse when we consider that we need hundreds of types of specific molecules, correctly organised, to function as a metabolising, self-replicating automaton. Thus, OOL is the first and decisive hurdle for a blind chance-and-necessity model of the origin of the world of life. So strong is the pattern we have every reason to expect, that those who claim or imply otherwise need to provide strong empirical demonstrations. Which, for almost 150 years, have simply not been forthcoming, never mind the screaming headlines we have seen from time to time and the confident declarations. There is every good reason to see that life based on cells is designed, per FSCO/I as a highly reliable sign of design. And once that hurdle is passed, there is no good reason to try to lock out design in explaining body plans onwards.

But, but, but, what if there is a much larger cosmos than we have seen? The problem here is, first, that this is speculation, not empirical science; we have crossed over into philosophy. In that discipline, every significant alternative sits at the table as of right and is compared on comparative difficulties, and we have to live with diverse views. No methodological naturalism lock-outs and censorship, in short. (Not that such were ever justified in science.) Next, we are now looking at cosmological fine tuning, and that points strongly to design of a cosmos set up for life, with a designer with the skill, knowledge and power to build a cosmos or multiverse. Frankly, necessary beings are on the table, and thus eternal beings, thence the source of a unified cosmic order that exhibits every sign of mathematical design right down to the roots. Occam's razor starts shaving, and when the shaving is over, we are looking at a unified eternal mind with the power to build universes.

Now, lastly, I see an "invitation" to go where people who have shown every uncivil pattern of behaviour -- threatening my family, trying outing tactics, defacing pictures they found, slander, etc. -- are given free course. I see utterly no good reason to immerse as much as a toe in such a festering swamp. If you cannot pull together the self-discipline to engage on the merits, with a civil tongue in your head, by rules that would pass in a living room in polite company, I am simply not interested. I have seen too much, and had too much done, to the point where some of what has been done would be a police matter if it were done face to face. Such an "offer" is declined and cannot be a genuine and serious one, since you must know the issues and concerns I speak of. If your side -- as the track record shows -- cannot sustain the decency to behave themselves, that simply underscores the sort of concerns that Plato put on the table so long ago now, in The Laws, Bk X:
Ath. . . . [[The avant garde philosophers and poets, c. 360 BC] say that fire and water, and earth and air [[i.e the classical "material" elements of the cosmos], all exist by nature and chance, and none of them by art, and that as to the bodies which come next in order-earth, and sun, and moon, and stars-they have been created by means of these absolutely inanimate existences. The elements are severally moved by chance and some inherent force according to certain affinities among them-of hot with cold, or of dry with moist, or of soft with hard, and according to all the other accidental admixtures of opposites which have been formed by necessity. After this fashion and in this manner the whole heaven has been created, and all that is in the heaven, as well as animals and all plants, and all the seasons come from these elements, not by the action of mind, as they say, or of any God, or from art, but as I was saying, by nature and chance only. [[In short, evolutionary materialism premised on chance plus necessity acting without intelligent guidance on primordial matter is hardly a new or a primarily "scientific" view! Notice also, the trichotomy of causal factors: (a) chance/accident, (b) mechanical necessity of nature, (c) art or intelligent design and direction.] . . . . [[Thus, they hold that t]he Gods exist not by nature, but by art, and by the laws of states, which are different in different places, according to the agreement of those who make them; and that the honourable is one thing by nature and another thing by law, and that the principles of justice have no existence at all in nature, but that mankind are always disputing about them and altering them; and that the alterations which are made by art and by law have no basis in nature, but are of authority for the moment and at the time at which they are made.- [[Relativism, too, is not new; complete with its radical amorality rooted in a worldview that has no foundational IS that can ground OUGHT. (Cf. 
here for Locke's views and sources on a very different base for grounding liberty as opposed to license and resulting anarchistic "every man does what is right in his own eyes" chaos leading to tyranny. )] These, my friends, are the sayings of wise men, poets and prose writers, which find a way into the minds of youth. They are told by them that the highest right is might [[ Evolutionary materialism leads to the promotion of amorality], and in this way the young fall into impieties, under the idea that the Gods are not such as the law bids them imagine; and hence arise factions [[Evolutionary materialism-motivated amorality "naturally" leads to continual contentions and power struggles; cf. dramatisation here], these philosophers inviting them to lead a true life according to nature, that is, to live in real dominion over others [[such amoral factions, if they gain power, "naturally" tend towards ruthless tyranny], and not in legal subjection to them.
In short, pardon, but your nihilism (and/or your enabling of such nihilism) is showing, and it tells sensible people that we should never trust such with significant power. Another reason to see that something is very wrong with the agenda we are dealing with. KF kairosfocus
Alan Fox's response at @26 shows that he isn't interested in understanding ID arguments, and apparently he admits he's incapable of understanding them. It doesn't matter to him if KF or UB have presented incontrovertible arguments/evidence for ID and have shown it incontrovertibly necessary for life, because he's not interested in it. He doesn't understand it and isn't interested in understanding it. He dismisses them with a wave of his "it's just your personal view" hand. Apparently, all Alan Fox cares about is to sit on the sideline and cheer as experts on his "side" offer rebuttals he doesn't understand to arguments/evidence he doesn't comprehend, and aim negative comments at those who disagree with his experts. So, yeah, go on over to TSZ so Alan can cheer and snipe from the sideline about arguments/evidence he doesn't understand and can't be bothered to figure out. William J Murray
Alan Fox:
Regarding your personal ID argument involving FSCO/I, I am not interested in pursuing it because it is currently solely your personal ID argument. None of the prominent ID proponents that I know of, Behe, Dembski, Meyer, Nelson etc. appear to have noticed FSCO/I, none of them have commented on it that I have noticed and certainly no-one has endorsed it. The DI or Evolution News have not picked up on it either. Your argument is currently deniable as ID.
BWAAAAAAAAAHAHAHAAAAHAHAHAHAHAAAAHAHAHHA Is that really your response, Alan? Really? Not that we should expect any better from you.
I reiterate that nothing you write about FSCO/I has any connection to reality and specifically to biology.
Do you have something better than your "say-so" Alan?
Take up the cudgels with professional scientists. Try the Skeptical Zone for a start.
LoL! That would be the last place someone should go to discuss something. First graders offer up better rationality than that ilk. And Alan is a fine example of the septic zone ilk. Colour us very unimpressed by the septic zone. Joe
Regarding your personal ID argument involving FSCO/I, I am not interested in pursuing it because it is currently solely your personal ID argument. None of the prominent ID proponents that I know of, Behe, Dembski, Meyer, Nelson etc. appear to have noticed FSCO/I, none of them have commented on it that I have noticed and certainly no-one has endorsed it. The DI or Evolution News have not picked up on it either. Your argument is currently deniable as ID. I reiterate that nothing you write about FSCO/I has any connection to reality and specifically to biology. Repeated demands for some genuine computation of a real biological example meet with empty bluster. Colour me unimpressed. But I'm just a layman, maybe I just can't spot the subtlety of your argument. Take up the cudgels with professional scientists. Try the Skeptical Zone for a start. Alan Fox
KF
Lastly, I do not control UD’s moderation and control policies, so I cannot say much other than that whoever Thaumaturge was, he had little or nothing of substance to contribute here. Had he been serious about substance, he could very easily have engaged this thread across yesterday, as he can still do from his preferred zone. (Not, that the track record says anything much, from what has been going on in the thread at TSZ all along. And, if he has indeed been banned, I am sure Mr Arrington can speak for himself as to why. I did note a warning that T was making no redeeming contribution to the blog by his ipse dixit sniping remarks.)
I wasn't asking for an explanation and I am aware of who the admin is here. I was merely pointing out that the TSZ thread author (KeithS AKA thaumaturge) whom you quote in your OP is no longer able to respond to you here, and that you are welcome at "The Skeptical Zone" if you want to continue debating with him. Of course, I don't expect you to, but the invitation remains. Alan Fox
Kairosfocus- It is obvious that Allan "looked at" ID arguments with his eyes wide shut and his head firmly planted [snip --language]. Joe
Alan- keiths couldn't take on anyone for anything. He doesn't have a clue about science and he sure as heck cannot support unguided evolution, as he doesn't know what evidence is. And nice to see that you still don't understand the meaning of the word "default". It's as if you are proud to be an ignorant loser. Joe
JWT, as in at 1 and 12 above, yup. [I just added, 5 above.] I have also pointed out just above that, by his evident lack of knowledge base, AF is not in a position to make the confident dismissals of FSCO/I he has been making. And BTW, every post he has ever made at UD is a successful test of same as produced, reliably, by design.

On testing cases of FSCI in biology, he should note that there is now a practice of genetic engineering, and that text has been written into DNA, so intelligently designed genetic information is a known fact. Beyond that, at the origin of life and of organism types, we were not there, so we are forced to infer on best explanation. AF knows or should know that there is a world of technology surrounding us [in addition to the cases he himself produces in this and other threads] that allows us to be highly confident that FSCO/I is a reliable sign of design, that being the only credible causal factor with demonstrated and reliably tested capacity to produce FSCO/I. This is backed up by various exercises that so far have managed to get 20 - 24 ASCII characters worth of info, where that is ~ 1 in 10^50 of a config space; we need a mechanism capable, within the resources of our cosmos or solar system, of working effectively in spaces 10^100 beyond that, 1 in 10^150. (That is the 500-bit end of the 500 - 1,000 bit scale config spaces we are talking about.)

And his demand for, in effect, a time machine to visit the remote past as a criterion of accepting the strength of the sign would, if consistently applied, wipe out all origins sciences; i.e. this is selective hyperskepticism. I have suggested as a start -- after 8 years of observing the ID debates, by his own admission -- that he look at my 101 on basic informatics. KF kairosfocus
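The scale comparison in the comment above (roughly 24 ASCII characters versus the 500-bit threshold) can be sketched in a few lines of Python; the function name is mine, for illustration only:

```python
from math import log10

def decimal_exponent_of_ascii_space(n_chars: int) -> float:
    """Decimal exponent of 128^n, the size of the space of n ASCII characters."""
    return n_chars * 7 * log10(2)

# ~24 characters: the scale cited for search exercises to date
print(f"24 chars: ~10^{decimal_exponent_of_ascii_space(24):.1f}")  # 24 chars: ~10^50.6

# The 500-bit lower threshold used in the comment
print(f"500 bits: ~10^{500 * log10(2):.1f}")                       # 500 bits: ~10^150.5
```

The gap between the two exponents, roughly 100 orders of magnitude, is the "spaces 10^100 beyond that" figure in the comment.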
AF: I notice, again, a want of substance. Earlier, you did not seem to know what a configuration space is [nor the broader phase/state space], nor the context of such in the history of both mathematics and physics, with particular reference to statistical thermodynamics. If you do not know these, you are unlikely to know of the informational school of thought on statistical thermodynamics, or the force of the point that the entropy measures the want of information on the specific microstate of a system's components, given that what we know are the gross aggregates, the macroscopic state-determining variables. As a result, you are in no position to properly and directly evaluate the link from classical to statistical to informational thermodynamics, and the linked issues on information raised by Elzinga.

However, you are in a position to understand the title of the paper by J S Wicken in 1979, and therefore to understand that it is simply and blatantly false that seeing a bridge from thermodynamics to information issues, in a context of the molecular machines and organisation of life forms, is a "proof" of being a creationist or the like. (Orgel, Yockey and others have pursued similar issues.) Onward, it is fair to observe that your dismissiveness toward the FSCO/I metric is utterly unwarranted. You need to acquaint yourself with the bit as a direct unit of information, and its root in the discussion of strings of symbols and possible vs observed states, with linked considerations on the probability distribution of symbols in messages. (This, from my always-linked note, may be a useful 101; certainly, it is linked to what I used to teach my students on this subject, with some fair degree of success.)

Lastly, I do not control UD's moderation and control policies, so I cannot say much other than that whoever Thaumaturge was, he had little or nothing of substance to contribute here.
Had he been serious about substance, he could very easily have engaged this thread across yesterday, as he can still do from his preferred zone. (Not, that the track record says anything much, from what has been going on in the thread at TSZ all along. And, if he has indeed been banned, I am sure Mr Arrington can speak for himself as to why. I did note a warning that T was making no redeeming contribution to the blog by his ipse dixit sniping remarks.) KF kairosfocus
@AF
that the author of the “Skeptical Zone” thread has just been banned by Barry
That's a good thing, though. Now he can claim street cred for being a victim of the ID "inquisition". Hey, AF, kindly look at all the posts you have written here. Replies may be edited INTO your postings by the commander in chief of this thread. JWTruthInLove
Meant to say that I am referring to Keith S, who was posting under the pseudonym "thaumaturge". Alan Fox
Interesting that the author of the "Skeptical Zone" thread has just been banned by Barry, or he might have been able to take you up on some of your assertions. You are of course, as is anyone else here, welcome to engage in the TSZ thread directly, rather than relying on the "B" team for a dilatory bit of finger poking. Alan Fox
PPS: It seems that there is an almost endless pile of deep-rooted, stubbornly clung-to misconceptions to be corrected. As for the "ID is a default" talking point, it should be noted -- as was pointed out ever so many times to EL and others but has been consistently ignored or evaded without good reason -- that there are two defaults in succession in the per-aspect design inference explanatory filter: (i) mechanical necessity, giving rise to natural regularity of low contingency; and, in the case of high contingency, (ii) chance, leading to stochastically distributed outcomes. The inference to design is precisely made in light of observing that, from Plato's day to now, we have consistently observed three main causal factors: necessity, chance and the ART-ificial, aka design. So, it is inductively well warranted to explain on such.

What design theory then seeks is to assess the characteristic features of the three and to assign causal explanations appropriately, asking and seeking to answer whether there is in our world a pattern that reliably indicates design as cause. To which the answer -- backed up by literally billions of cases -- is yes: Wicken's functionally organised "wiring diagram," in a context where there is sufficient complexity joined to specificity, which makes chance an implausible explanation of the observed entity.

AF wishes to dodge that overwhelming body of evidence, and to suggest that if we do not have good reason to infer to necessity or chance, the only reasonable conclusion is that we do not have any good explanation of the likely cause. In short, he has no good alternative, but does not -- obviously for ideological reasons -- want to face the direct weight of evidence that FSCO/I is a strong sign of design. And this goes to the point that he wishes to pretend that FSCO/I -- which starts with the functional organisation of the texts he posts here -- is a figment of our over-active, God-of-the-gaps imaginations.
When challenged with the measured value of FSCO/I in his own posts, he simply refuses to read. And, when he is informed that design theory, from the outset of its modern form, has consistently highlighted that from the world of life we may infer to design, but have no good basis as a scientific inference to infer to a specific designer, his ilk are quick to suggest hidden-agenda Creationist conspiracism. Sadly revealing. kairosfocus
PS: On abusive God of the gaps dismissals such as was resorted to by AF above, cf. 39 in the UD WAC's (NB: BA et al, the link has become corrupt):
39] ID is Nothing More Than a “God of the Gaps” Hypothesis

Famously, when his calculations did not quite work, Newton proposed that God or angels nudged the orbiting planets every now and then to get them back into proper alignment. Later scientists were able to show that the perturbations of one planet acting on another are calculable and do not in aggregate skew the calculations. Newton’s error is an example of the “God of the gaps” fallacy – if we do not understand it, God must have done it.

ID is not proposing “God” to paper over a gap in current scientific explanation. Instead ID theorists start from empirically observed, reliable, known facts and generally accepted principles of scientific reasoning:

(a) Intelligent designers exist and act in the world.

(b) When they do so, as a rule, they leave reliable signs of such intelligent action behind.

(c) Indeed, for many of the signs in question such as CSI and IC, intelligent agents are the only observed cause of such effects, and chance + necessity (the alternative) is not a plausible source, because the islands of function are far too sparse in the space of possible relevant configurations.

(d) On the general principle of science, that “like causes like,” we are therefore entitled to infer from sign to the signified: intelligent action.

(e) This conclusion is, of course, subject to falsification if it can be shown that undirected chance + mechanical forces do give rise to CSI or IC. Thus, ID is falsifiable in principle but well supported in fact.

In sum, ID is indeed a legitimate scientific endeavor: the science that studies signs of intelligence.
kairosfocus
F/N: Elzinga adds to the "Ipse Dixit"-ism mix over at TSZ, Dec 20th, in the same thread:
ID/creationists believe that evolution violates the second law of thermodynamics. It is the fundamental misconception that underlies all ID/creationist denial and all ID/creationist “theory.” There is not one ID/creationist that can pass a basic concept test on entropy and the second law [--> Notice the strawman stereotyping and guilt-by-association tactic, whereby design thought and Creationism are equated, with all sorts of onward conspiracy mongering alluded to; cf. the UD WAC here in reply . . . ]. And it is this fundamental misconception that genetically links the ID crowd directly to the “scientific” creationist crowd. It is a clear genetic marker that they just can’t hide. [--> Actually, as will be shown below, this just shows that ME has not done basic homework; as in, kindly tell us the title of the 1979 paper by that notorious Creation Scientist -- NOT -- J S Wicken, in which he stated: "organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. [--> this is the source of the descriptive abbreviations, FSCI and FSCO/I . . . . ] It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’"; cf. below.] It is a marker that is even more robust than cdesign proponentsists.
This aptly further illustrates the pattern of projection, ad hominem-laced dismissive declarations and failure to address serious, substantial concerns on the merits. As a first note, the allusion to the Discovery Institute's late-90's promotional argument by Johnson -- on sound scientific research multiplied by addressing the sociocultural agendas of radical evolutionary materialism dressed up in the lab coat -- is illustrative of how a tendentious and materially misleading, irresponsible, fear-mongering talking-point narrative is used by objectors to design theory to poison the atmosphere and cloud issues through false and/or misleading assertions endlessly repeated drumbeat style, as though such urban legends were well-substantiated fact. This tells us a lot about the level of thinking we are dealing with.

However, there is a technical issue on the table: thermodynamics.

1 --> Evidently, Elzinga has first of all not troubled to seriously read the very first technical design theory book, TMLO by Thaxton (a PhD chemist . . . ) et al., chs. 7 and 8. Let me clip the transitional comment at the end of Ch. 7, after a discussion that largely focussed on classical thermodynamics, and on Gibbs free energy in particular:
While the maintenance of living systems is easily rationalized in terms of thermodynamics, the origin of such living systems is quite another matter. Though the earth is open to energy flow from the sun, the means of converting this energy into the necessary work to build up living systems from simple precursors remains at present unspecified (see equation 7-17). The "evolution" from biomonomers to fully functioning cells is the issue. Can one make the incredible jump in energy and organization from raw material and raw energy, apart from some means of directing the energy flow through the system? In Chapters 8 and 9 we will consider this question, limiting our discussion to two small but crucial steps in the proposed evolutionary scheme, namely the formation of protein and DNA from their precursors. It is widely agreed that both protein and DNA are essential for living systems and indispensable components of every living cell today.11 Yet they are only produced by living cells. Both types of molecules are much more energy and information rich than the biomonomers from which they form. Can one reasonably predict their occurrence given the necessary biomonomers and an energy source? Has this been verified experimentally? These questions will be considered . . .
2 --> This comes after a responsible treatment of the thermodynamics involved, and in effect highlights the pivotal issue, which I took up first in my own discussion of thermodynamics considerations in Appendix 1 of my always linked note (click on my handle in the LH column . . . this has been present for every post I have ever made as a comment at UD).

3 --> Which effectively starts from a 101 on thermodynamics, including deducing what "raw energy" implies, and also the effect of such an injection in Clausius' example used to deduce the second law. Namely, as the subtraction on transfer of d'Q of heat from a body at T_hot to another at T_cold shows, the RISE in entropy of the importing body (using the ratio d'Q/T) will overwhelm the loss from the exporting one, leading to an overall increase. That is, right from the beginning, it was well understood that importation of raw energy tends to increase entropy.

4 --> Further to this, in Section A of the same discussion, I explained what information is, and identified functionally specific, complex information [FSCI] as a pivotal concept in design thought. In so doing, I pointed out the rise and increasing acceptance of the informational perspective on thermodynamics and particularly entropy, using Harry Robertson's Statistical Thermophysics as a pivot, and highlighted the following understanding of what entropy means, from Wiki's article on Entropy and Information (I clip the current rendering):
At an everyday practical level the links between information entropy and thermodynamic entropy are not evident. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the minuteness of Boltzmann's constant kB indicates, the changes in S/kB for even tiny amounts of substances in chemical and physical processes represent amounts of entropy which are extremely large compared to anything seen in data compression or signal processing. Furthermore, in classical thermodynamics the entropy is defined in terms of macroscopic measurements and makes no reference to any probability distribution, which is central to the definition of information entropy. At a multidisciplinary level, however, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamic entropy, as explained by statistical mechanics, should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being proportional to the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics, with the constant of proportionality being just the Boltzmann constant. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states for the system, thus making any complete state description longer. (See article: maximum entropy thermodynamics). 
Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total thermodynamic entropy does not decrease (which resolves the paradox). Landauer's principle has implications on the amount of heat a computer must dissipate to process a given amount of information, though modern computers are nowhere near the efficiency limit.
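The Clausius bookkeeping in point 3 above is easy to check numerically. A minimal sketch, with arbitrary illustrative temperatures (600 K and 300 K are my example values, not figures from the text):

```python
# Heat d'Q flowing from a hot body to a cold one: the entropy gained
# by the importer (d'Q/T_cold) exceeds the entropy lost by the
# exporter (d'Q/T_hot), so total entropy rises -- the second law.

def net_entropy_change(dQ: float, T_hot: float, T_cold: float) -> float:
    """Net entropy change in J/K for heat dQ (J) flowing hot -> cold."""
    dS_exporter = -dQ / T_hot    # loss by the hot, exporting body
    dS_importer = +dQ / T_cold   # larger gain by the cold, importing body
    return dS_exporter + dS_importer

# Illustrative values: 1 J moving from 600 K to 300 K.
dS = net_entropy_change(1.0, 600.0, 300.0)
assert dS > 0   # overall increase, as Clausius' example shows
print(f"net dS = {dS:.6f} J/K")  # 1/300 - 1/600 = 1/600 J/K
```

Any positive dQ with T_hot > T_cold gives a net increase, which is the point: importing raw energy raises, not lowers, entropy.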
5 --> As my Appendix 1 discusses, ever since Brillouin in the 50's - 60's, that link has been spotted and discussed at some level. Indeed, that famous physicist introduced the idea that information is negentropy. The informational view of thermodynamics traces to Jaynes, of the same general era.

6 --> We can now pick up the concept of work, viewed as the ordered application of force that displaces the object at its point of application along its line of action, quantified on the increment F*dx.

7 --> Clearly, a pattern of such ordered displacements can create complex, functional organisation, whether at macro or micro levels. Thence, my discussion in the same Appendix 1 of a vat full of micro-jet parts sufficiently small to take part in Brownian motion, and the challenge of countering diffusion through blind chance and mechanical necessity vs applying, as a thought exercise, an army of intelligently directed nano-machines that force the assembly of the jet based on the designed pattern of nodes and arcs.

8 --> It is easy to see that for an object complex enough that the nodes-and-arcs Wicken wiring diagram [cf. his 1979 remark on that topic, which is the actual root of the descriptive terms and initials FSCI and FSCO/I . . . . ] for a flyable jet involves at least 500 - 1,000 structured yes/no questions [= 500 - 1,000 bits], the atomic resources of our solar system or of the observed cosmos as a whole will be blatantly inadequate for us to expect that such a structure would spontaneously emerge through blind forces. But, FSCO/I-rich entities are routinely created by planned, deliberate work. In short, counterflow leading to FSCO/I is a strong sign of deliberate work according to an organising plan.

9 --> In the UD ID Foundations series, this general topic came up at no. 2 in the main series, Jan 23, 2011, here on. Notice this from Dembski, on the point:
. . .[From commonplace experience and observation, we may see that:] (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.)
10 --> This is in fact, on billions of test cases, the only actually observed source of FSCO/I [take this as meaning complex, information-rich, functional organisation that is dependent on specific arrangement and coupling of parts to achieve function in some relevant way, e.g. a car engine or even a string of ASCII symbols expressing a blog post in English], and it is reasonable then to infer, on billions of test cases and the analysis outlined above, that such FSCO/I is a sign of design.

11 --> That is the context in which I argued in that post:
As fig. A shows, open systems can indeed readily — but, alas, temporarily — increase local organisation by importing energy from a “source” and doing the right kind of work. But, generally only in a context of guiding information based on an intent or program, or its own functional organisation, and at the expense of exhausting compensating disorder to some “sink” or other. (NB: here, something like a timing belt and set of cams is a program.)

4 –> Heat — in short: energy moving between bodies due to temperature difference, by radiation, convection or conduction — cannot wholly be converted to work. (Here, the radiant energy flowing out from our sun’s surface at some 6,000 degrees Celsius to earth at some 15 degrees Celsius, average, is a form of heat.)

5 –> Physically, by definition: work is done when applied forces impart motion along their lines of action to their points of application, e.g. when we lift a heavy box to put it on a shelf, we do work. For force F, and distance along line of motion dx, the work is: dW = F*dx, . . . where, strictly, * denotes a “dot product”

6 –> But, that definition does not say anything about whether or not the work is constructive — a tornado ripping off a roof and flying its parts for a mile to land elsewhere has done physical work, but not constructive work. (Side-bar: constructive work is closely connected to the sort we get paid for: if your work is constructive, desirable and affordable, you get paid for it. [Hence, the connexion between energy use at a given general level of technology and the level of economic activity and national income.])

7 –> Similarly, it says nothing about the origin of the energy conversion device.

8 –> When that device itself manifests functionally specific, complex organisation and associated information — FSCO/I (e.g. a gas engine-generator set or a solar PV panel, battery and wind turbine set, as opposed to, e.g. 
the natural law-dominated order exhibited by tornadoes or hurricanes as vortexes), we have good reason to infer that the conversion device was designed. (Side-bar: Now, there is arguably a link between increased information and reduction in degrees of microscopic freedom of distributing energy and mass. Where, entropy is best understood as a logarithmic measure of the number of ways energy and mass can be distributed under a given set of macro-level constraints like pressure, temperature, magnetic field, etc.: s = k ln w, k being Boltzmann’s constant and w the number of “ways.” Jaynes therefore observed, aptly [but somewhat controversially]: “The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its [macro-level observable] thermodynamic state. This is a perfectly ‘objective’ quantity . . . There is no reason why it cannot be measured in the laboratory.”[Cited, Harry Robertson, Statistical Thermophysics, Prentice Hall, 1993, p. 36.] This connects fairly directly to the information as negentropy concept of Brillouin and Szilard, but that is not our focus here, which is instead on the credible source/cause of energy conversion devices exhibiting FSCO/I. As this thought experiment shows [cf.TMLO chs 8 & 9], the correct assembly of such from microscopic components scattered at random in a vat or a pond would indeed drastically reduce entropy and increase the functionality [which would define an observable functional state], but the basic message is that since the scattered microstates so overwhelm the clumped then the functional ones, it is maximally unlikely that such would ever happen spontaneously. Nor, would heating up the pond or striking it with lightning or the like be likely to help matters out. 
Just as we normally observe an ink spot dropped in a vat diffusing throughout the vat, not collecting back together again. In short, to produce complex, specific organisation to achieve function, the most credible path is to assemble co-ordinated, well-matched parts according to a known good plan.)

9 –> The reasonableness of the inference from observing a high-FSCO/I energy converter to its having been designed would be sharply multiplied when the device in question is part of a von Neumann, self-replicating automaton [vNSR]:
12 --> vNSR? Yes: the living cell is a self-replicating metabolic automaton that therefore involves the following components, in addition to the general organised and highly complex, specific functions it carries out:
10 –> Here, we see a machine that not only functions in its own behalf but has the ADDITIONAL — that is very important — capacity of self-replication based on stored specifications, which requires:

(i) an underlying storable code to record the required information to create not only (a) the primary functional machine [here, for a "clanking replicator" as illustrated, a Turing-type “universal computer”; in a cell this would be the metabolic entity that transforms environmental materials into required components etc.] but also (b) the self-replicating facility; and, that (c) can express step by step finite procedures for using the facility;

(ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with

(iii) a tape reader that reads and interprets the coded specifications and associated instructions; thus controlling:

(iv) position-arm implementing machines with “tool tips” controlled by the tape reader and used to carry out the action-steps for the specified replication; backed up by

(v) either: (1) a pre-existing reservoir of required parts and energy sources, or (2) associated “metabolic” machines carrying out activities that, as a part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment.

11 –> Also, parts (ii), (iii) and (iv) are each necessary for, and together are jointly sufficient to implement, a self-replicating machine with an integral von Neumann universal constructor.

12 –> That is, we see here an irreducibly complex set of core components that must all be present in a properly organised fashion for a successful self-replicating machine to exist. [Take just one core part out, and self-replicating functionality ceases: the self-replicating machine is irreducibly complex (IC).] 
13 –> This irreducible complexity is compounded by the requirement (i) for codes, requiring organised symbols and rules to specify both steps to take and formats for storing information, and (v) for appropriate material resources and energy sources.

14 –> Immediately, we are looking at islands of organised function for both the machinery and the information in the wider sea of possible (but mostly non-functional) configurations.

15 –> In short, outside such functionally specific — thus, isolated — information-rich hot (or, “target”) zones, want of correct components and/or of proper organisation and/or co-ordination will block function from emerging or being sustained across time from generation to generation.

16 –> So, we may conclude: once the set of possible configurations of relevant parts is large enough and the islands of function are credibly sufficiently specific/isolated, it is unreasonable to expect such function to arise from chance, or from chance circumstances driving blind natural forces under the known laws of nature.
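The scale comparison behind point 16 can be checked directly: a 500-bit configuration space already rivals the commonly cited ~10^150 bound on Planck-time states of the ~10^80 atoms of the observed cosmos, and a 1,000-bit space dwarfs it. A quick sketch:

```python
from math import log10

# Number of distinct configurations for 500 structured yes/no questions:
config_space = 2 ** 500
print(round(log10(config_space), 1))   # ~150.5, i.e. about 3.3 * 10^150

# Commonly cited upper bound on Planck-time states of the observed cosmos:
resource_bound = 10 ** 150

# The 500-bit space already exceeds the search resources, and at
# 1,000 bits (2^1000 ~ 10^301) the gap becomes overwhelming.
assert config_space > resource_bound
assert 2 ** 1000 > resource_bound ** 2
```

Nothing in the arithmetic settles how sparse functional configurations actually are in such a space -- that is the empirical question in dispute -- but it does show why the 500 - 1,000 bit range is used as the threshold.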
13 --> Notice how the vNSR involved in the living cell directly implicates coded, digital information, which -- as has been notorious since the following letter from Crick to his son on March 19, 1953 -- has long been understood to be involved in how DNA functions:
"Now we believe that the DNA is a code. That is, the order of bases (the letters) makes one gene different from another gene (just as one page of print is different from another)"
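As a back-of-envelope illustration of the digital-code framing (my own arithmetic, not from Crick's letter): a four-letter base alphabet carries two bits per base, so a three-base codon spans 64 possibilities, comfortably enough to specify the 20 standard amino acids plus stop signals, with room for the code's well-known redundancy:

```python
from math import log2

bases = "ACGT"                      # four-letter DNA alphabet
bits_per_base = log2(len(bases))    # 2.0 bits per base
codons = len(bases) ** 3            # 64 possible three-base codons
bits_per_codon = 3 * bits_per_base  # 6.0 bits per codon

# 64 codons map onto 20 amino acids plus stop: the code is redundant.
assert codons == 64 and codons > 20 + 1
print(bits_per_base, codons, bits_per_codon)  # 2.0 64 6.0
```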
14 --> This is of course a main reason why OOL is a pivotal context in which design theory has arisen in the context of the world of life, starting from TMLO. (That is not to say that the origin of increments in FSCO/I in making major body plans is not a significant consideration.) But at OOL, which is the ROOT of the Darwinian tree of life -- and, obviously, no roots means no shoots, branches or twigs -- the favourite "out" of appealing to differential reproductive success of sub-populations in ecological niches [i.e. natural selection in one form or another] is off the table. For, the origin of the vNSR required for that is most centrally on the table.

15 --> In this light, let us listen to Wicken:
Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. [--> Notice the roots of the term FSCO/I] It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]
16 --> Now, strike out "selection" and ask: what is left on the table as a credible explanation for FSCO/I?

17 --> Also, just what areas of thought are being integrated here by that notorious Creationist -- NOT -- J S Wicken, again? Let us read the title:
The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion
18 --> But, but, but, we thought this linkage was a sure signature of those notorious and dangerous Creationists in action. It seems not. OOPS.

19 --> And it is clear from the above issues that design sits at the table of scientific explanations as of right, not by grudging sufferance. Starting, here, with the origin of life. And if design is there at the root, there is no good reason to impose a priori materialism disguised as mere methodological constraints and "reasonable" redefinitions of science and its methods -- scientific explanations MUST be naturalistic -- so it sits as of right as a credible explanation all the way from microbes to Mozart.

____________

"Ipse Dixit"-ism as a rebuttal to design theory fails, yet again. KF kairosfocus
Right Alan, it's all nonsensical. Design, purpose, function, configuration, specification, information, agency, consciousness, complexity, sophistication, and much much more, are all easily dismissed and waved away. You're well within your rights to scoff endlessly about every challenging subject, and consistently refuse to stake out any claims beyond reductive absurdity. Functional isolation? Ridiculous! Design? Illusory! Complexity? Ignorant! Specification? Bafflegab! Evolution by unguided processes? Self-evidently true! Yes, I've practically summed you up in a single comment. :lol: Oh, and let's not forget the self-explanatory floating link: Clown noses Chance Ratcliff
lifepsy, as well you should be skeptical,,, it is simply preposterous to think that such unfathomable functional complexity arose from anything other than blind undirected processes. :) Thank goodness Fox and company are here to keep us 'scientific',,, Nothing To See Here - video (if there is any question, Fox is the one handling crowd control) http://www.youtube.com/watch?v=rSjK2Oqrgic bornagain77
...that’s a superb refutation of functional isolation...
What's to refute? It's gobbledegook. And would still only be a "God of the gaps" argument. Default to "design". AF: Kindly do your homework next time, before resorting to more dismissive "Ipse Dixit"-ism as a substitute for substantial reply on the merits. KF Alan Fox
Oops Ratcliff Alan Fox
Thanks for pointing that out, Radcliffe Reification Alan Fox
BA77, I'm not convinced. I mean... besides the reams of rational evidence-based argumentation, and actual peer-reviewed experimental empirical demonstrations of limits to neo-darwinian mechanisms, what "mysterious barrier" is really preventing a fish+400 million years from turning into a human? lifepsy
Alan @1, that's a superb refutation of functional isolation. You should copy and paste that into a YouTube comment, that is if you didn't already copy and paste it from one. But fix your hyperlink first, or your sloppy commenting might be confused for a lousy argument. Chance Ratcliff
further notes:
Mammalian overlapping genes: the comparative perspective. - 2004 Excerpt: it is rather surprising that a large number of genes overlap in the mammalian genomes. Thousands of overlapping genes were recently identified in the human and mouse genomes. However, the origin and evolution of overlapping genes are still unknown. We identified 1316 pairs of overlapping genes in humans and mice and studied their evolutionary patterns. It appears that these genes do not demonstrate greater than usual conservation. Studies of the gene structure and overlap pattern showed that only a small fraction of analyzed genes preserved exactly the same pattern in both organisms. http://www.ncbi.nlm.nih.gov/pubmed/14762064 Doubling the information from the Double Helix - April 27, 2012 Excerpt: The study’s findings have shown that two microRNA genes with different functions can be produced from the same piece (sequence) of DNA — one is produced from the top strand and another from the bottom complementary ‘mirror’ strand. Specifically, the research has shown that a single piece of human DNA gives rise to two fully processed microRNA genes that are expressed in the brain and have different and previously unknown functions. One microRNA is expressed in the parts of nerve cells that are known to control memory function and the other microRNA controls the processes that move protein cargos around nerve cells.,,, Helen Scott and Joanna Howarth, the lead authors on the study, added: “We have now found that both sides of the double helix can each produce a microRNA. These two microRNAs are almost a perfect mirror of each other, but due to slight differences in their sequence, they regulate different sets of protein producing RNAs, which will in turn affect different biological functions. 
Such mirror-miRNAs are likely to represent a new group of microRNAs with complex roles in coordinating gene expression, doubling the capacity of regulation.” http://phys.org/news/2012-04-helix.html DNA Caught Rock 'N Rollin': On Rare Occasions DNA Dances Itself Into a Different Shape - January 2011 Excerpt: Because critical interactions between DNA and proteins are thought to be directed by both the sequence of bases and the flexing of the molecule, these excited states represent a whole new level of information contained in the genetic code, http://www.sciencedaily.com/releases/2011/01/110128104244.htm Multidimensional Genome – Dr. Robert Carter – 10 minute video http://www.metacafe.com/watch/8905048/ The next evolutionary synthesis: Jonathan BL Bard Excerpt: We now know that there are at least 50 possible functions that DNA sequences can fulfill [8], that the networks for traits require many proteins and that they allow for considerable redundancy [9]. The reality is that the evolutionary synthesis says nothing about any of this; for all its claim of being grounded in DNA and mutation, it is actually a theory based on phenotypic traits. This is not to say that the evolutionary synthesis is wrong, but that it is inadequate – it is really only half a theory! http://www.biosignaling.com/content/pdf/1478-811X-9-30.pdf Systems biology: Untangling the protein web - July 2009 Excerpt: Vidal thinks that technological improvements — especially in nanotechnology, to generate more data, and microscopy, to explore interaction inside cells, along with increased computer power — are required to push systems biology forward. "Combine all this and you can start to think that maybe some of the information flow can be captured," he says. But when it comes to figuring out the best way to explore information flow in cells, Tyers jokes that it is like comparing different degrees of infinity. 
"The interesting point coming out of all these studies is how complex these systems are — the different feedback loops and how they cross-regulate each other and adapt to perturbations are only just becoming apparent," he says. "The simple pathway models are a gross oversimplification of what is actually happening." http://www.nature.com/nature/journal/v460/n7253/full/460415a.html
etc.. etc.. etc.. bornagain77
further notes:
Circular RNAs: A Hidden, Parallel Universe - Cornelius Hunter PhD. - March 2, 2013 Excerpt: Recall that protein-coding genes, in addition to coding for an incredible protein machine, may also contain several more layers of information encoding signals for the transcript (mRNA) stability, mRNA editing, DNA copy error correction, the speed of translation, the protein’s three-dimensional protein structure, the stability of that structure, the multiple functions of the protein, interactions of the protein with other proteins, instructions for transport, avoiding an amyloid state, any other genes that overlap with the gene, and controlling tRNA selection which can help to respond to different environmental conditions. That is a tall order and now we have yet another layer of information for which genes much encode: circular RNA macromolecules which just happen to interact with microRNA and which just happen to be expressed at the right time, because if they are expressed at the wrong time you don’t have a normal brain. And amazingly, in protein-coding genes, circular RNA macromolecules may be encoded both in the antisense strand and in the sense strand. In fact numerous circular RNAs form by head-to-tail splicing of exons.,,, http://darwins-god.blogspot.com/2013/03/circular-rnas-hidden-parallel-universe.html "Complexity Brake" Defies Evolution - August 2012 Excerpt: Physicists can use statistics to describe a homogeneous system like an ideal gas, because one can assume all the member particles interact the same. Not so with life. When describing heterogeneous systems each with a myriad of possible interactions, the number of discrete interactions grows faster than exponentially. Koch showed how Bell's number (the number of ways a system can be partitioned) requires a comparable number of measurements to exhaustively describe a system. 
Even if human computational ability were to rise exponentially into the future (somewhat like Moore's law for computers), there is no hope for describing the human "interactome" -- the set of all interactions in life. "This is bad news. Consider a neuronal synapse -- the presynaptic terminal has an estimated 1000 distinct proteins. Fully analyzing their possible interactions would take about 2000 years. Or consider the task of fully characterizing the visual cortex of the mouse -- about 2 million neurons. Under the extreme assumption that the neurons in these systems can all interact with each other, analyzing the various combinations will take about 10 million years..., even though it is assumed that the underlying technology speeds up by an order of magnitude each year. " Even with shortcuts like averaging, "any possible technological advance is overwhelmed by the relentless growth of interactions among all components of the system," Koch said.,, Why can't we use the same principles that describe technological systems? Koch explained that in an airplane or computer, the parts are "purposefully built in such a manner to limit the interactions among the parts to a small number." The limited interactome of human-designed systems avoids the complexity brake. "None of this is true for nervous systems.",,, to read more go here: http://www.evolutionnews.org/2012/08/complexity_brak062961.html Unexpectedly small effects of mutations in bacteria bring new perspectives - November 2010 Excerpt:,,, using extremely sensitive growth measurements, doctoral candidate Peter Lind showed that most mutations reduced the rate of growth of bacteria by only 0.500 percent. No mutations completely disabled the function of the proteins, and very few had no impact at all. Even more surprising was the fact that mutations that do not change the protein sequence had negative effects similar to those of mutations that led to substitution of amino acids. 
A possible explanation is that most mutations may have their negative effect by altering mRNA structure, not proteins, as is commonly assumed.
http://www.physorg.com/news/2010-11-unexpectedly-small-effects-mutations-bacteria.html

The Majority of Animal Genes Are Required for Wild-Type Fitness - Ramani, A. K. et al., Cell 148(4): 792-802, 2012
Excerpt: Whereas previous studies typically assess phenotypes that are detectable by eye after a single generation, we monitored growth quantitatively over several generations. In contrast to previous estimates, we find that, in these multigeneration population assays, the majority of genes affect fitness, and this suggests that genetic networks are not robust to mutation. Our results demonstrate that, in a single environmental condition, most animal genes play essential roles. This is a higher proportion than for yeast genes, and we suggest that the source of negative selection is different in animals and in unicellular eukaryotes.
http://www.icr.org/article/7166/

Epistasis between Beneficial Mutations - July 2011
Excerpt: We found that epistatic interactions between beneficial mutations were all antagonistic—the effects of the double mutations were less than the sums of the effects of their component single mutations. We found a number of cases of decompensatory interactions, an extreme form of antagonistic epistasis in which the second mutation is actually deleterious in the presence of the first. In the vast majority of cases, recombination uniting two beneficial mutations into the same genome would not be favored by selection, as the recombinant could not outcompete its constituent single mutations.
https://uncommondescent.com/epigenetics/darwins-beneficial-mutations-do-not-benefit-each-other/

Mutations: when benefits level off - June 2011 - (Lenski's E. coli after 50,000 generations, roughly the equivalent of 1 million years of supposed human evolution)
Excerpt: After having identified the first five beneficial mutations combined successively and spontaneously in the bacterial population, the scientists generated, from the ancestral bacterial strain, 32 mutant strains exhibiting all of the possible combinations of each of these five mutations. They then noted that the benefit linked to the simultaneous presence of five mutations was less than the sum of the individual benefits conferred by each mutation individually.
http://www2.cnrs.fr/en/1867.htm?theme1=7

A Serious Problem for Darwinists: Epistasis Decreases Chances of Beneficial Mutations - November 8, 2012
Excerpt: A recent paper in Nature finds that epistasis (interactions between genetic changes) is much more pervasive than previously assumed. This strongly limits the ability of beneficial mutations to confer fitness on organisms. ,,, It takes an outsider to read this paper and see how disturbing it should be to the consensus neo-Darwinian theory. All that Darwin skeptics can do is continue to point to papers like this as severe challenges to the consensus view. Perhaps a few will listen and take it seriously.
http://www.evolutionnews.org/2012/11/epistasis_decr066061.html
bornagain77
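As an editorial aside on the "Complexity Brake" excerpt above: the super-exponential growth of Bell's number (the count of ways a system can be partitioned) is easy to check numerically. The sketch below is my own illustration, not from any of the quoted sources; the helper name `bell_numbers` is assumed. It computes Bell numbers via the Bell triangle and compares them with plain exponential growth:

```python
def bell_numbers(n):
    """Return [B(0), ..., B(n)] using the Bell triangle."""
    row = [1]
    bells = [1]
    for _ in range(n):
        # Each new row starts with the last entry of the previous row;
        # each later entry adds the entry directly above it.
        new_row = [row[-1]]
        for value in row:
            new_row.append(new_row[-1] + value)
        row = new_row
        bells.append(row[0])
    return bells

bells = bell_numbers(20)
print(bells[:6])          # [1, 1, 2, 5, 15, 52]
print(bells[20] > 2**20)  # True: partition counts outgrow plain exponentials
```

For Koch's synapse example, B(1000) dwarfs any feasible measurement budget, which is the point of the "complexity brake" argument as quoted.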
FSCO/I has been adequately described, quantified, exemplified, empirically tested as a reliable sign of design any number of times...
Adequately described? Adequately? On whose judgement? And where can this adequate description be found? Quantified? Are you seriously claiming you can quantify FSCO/I with regard to a biological example? Surely the way to confirm this assertion is to demonstrate that you can indeed do what you claim. Exemplified? Is it too much to ask for a cut-and-paste of, or a link to, this example? (Let's hope it is a real biological example.) Empirically tested as a reliable sign of design? What on Earth would a sign of design be other than "it sure looks designed to me"? And how do your "empirical" tests work, and how and on what were they conducted? Any number of times? So how hard can it be to tell us what this empirical testing consisted of? _________ AF, cf 15 ff below. Your questions above are answered -- have long been answered -- here and at IOSE, which, as can be easily seen, I wrote. You should be well aware of this 101, given your 8 years of observing the ID debates. KF Alan Fox
lifepsy, if I may offer a bit more to,,,
What mysterious barrier do IDers think prevents microevolutionary change from accumulating until it becomes macroevolution?
Well, that 'mysterious barrier' that 'prevents microevolutionary change from accumulating until it becomes macroevolution', once you understand it, is not really that mysterious a barrier at all: Poly-Functional Complexity equals Poly-Constrained Complexity.

The primary problem that poly-functional complexity presents for neo-Darwinists, or even Theistic Evolutionists, is this: to put it plainly, the finding of a severely poly-functional/poly-constrained genome by the ENCODE study, and further studies, has put the odds of what was already astronomically impossible (finding a functional protein in sequence space) to what can only be termed, for lack of better words, fantastically astronomically impossible. I.e. instead of the infamous "Methinks it is like a weasel" single element of functional information that Darwinists pretend they are facing in any evolutionary search for a functional sequence, where single letters can be changed without affecting anything but the particular sequence, we would actually be encountering something more akin to this illustration found on page 141 of the book Genetic Entropy by Dr. Sanford in our search for a functionally meaningful sequence in biology.
S A T O R
A R E P O
T E N E T
O P E R A
R O T A S

http://en.wikipedia.org/wiki/Sator_Square
Which translates as: THE SOWER NAMED AREPO HOLDS THE WORKING OF THE WHEELS. This ancient puzzle, which dates back to 79 AD, reads the same four different ways. Thus, if we change (mutate) any letter, we may get a new meaning for a single reading read any one way, as in Dawkins' weasel program, but we will consistently destroy the other three readings of the message with the new mutation (save for the center). This is what is meant when it is said a poly-functional genome is poly-constrained to any random mutations, and is thus a 'mysterious barrier' that 'prevents microevolutionary change from accumulating until it becomes macroevolution.' Notes:
The Extreme Complexity Of Genes – Dr. Raymond G. Bohlin - video
http://www.metacafe.com/watch/8593991/

Astonishing DNA complexity update
Excerpt: (ENCODE revealed) The untranslated regions (now called UTRs, rather than ‘junk’) are far more important than the translated regions (the genes), as measured by the number of DNA bases appearing in RNA transcripts. Genic regions are transcribed on average in five different overlapping and interleaved ways, while UTRs are transcribed on average in seven different overlapping and interleaved ways. Since there are about 33 times as many bases in UTRs as in genic regions, that makes the ‘junk’ about 50 times more active than the genes.
http://creation.com/astonishing-dna-complexity-update

Dual-Coding Genes in Mammalian Genomes - 2007
Excerpt: A textbook human gene encodes a protein using a single reading frame. Alternative splicing brings some variation to that picture, but the notion of a single reading frame remains. Although this is true for most of our genes, there are exceptions. Like viral counterparts, some eukaryotic genes produce structurally unrelated proteins from overlapping reading frames. The examples are spectacular (G-protein alpha subunit [Gnas1] or INK4a tumor suppressor), but scarce. The scarcity is anthropogenic in origin: we simply do not believe that dual-coding genes can occur in eukaryotes. To challenge this assumption, we performed the first genome-wide scan for mammalian genes containing alternative reading frames located out of frame relative to the annotated protein-coding region. Using a newly developed statistical framework, we identified 40 such genes. Because our approach is very conservative, this number is likely a significant underestimate, and future studies will identify more alternative reading frame–containing genes with fascinating biology.
http://www.plosone.org/article/info:doi/10.1371/journal.pcbi.0030091

A genome-wide study of dual coding regions in human alternatively spliced genes - 2006
Excerpt: Alternative splicing is a major mechanism for gene product regulation in many multicellular organisms. By using different exon combinations, some coding regions can encode amino acids in multiple reading frames in different transcripts. Here we performed a systematic search through a set of high-quality human transcripts and show that approximately 7% of alternatively spliced genes contain dual coding regions.
http://www.ncbi.nlm.nih.gov/pubmed/16365380

Time to Redefine the Concept of a Gene? – Sept. 10, 2012
Excerpt: As detailed in my second post on alternative splicing, there is one human gene that codes for 576 different proteins, and there is one fruit fly gene that codes for 38,016 different proteins! While the fact that a single gene can code for so many proteins is truly astounding, we didn’t really know how prevalent alternative splicing is. Are there only a few genes that participate in it, or do most genes engage in it? The ENCODE data presented in reference 2 indicates that at least 75% of all genes participate in alternative splicing. They also indicate that the number of different proteins each gene makes varies significantly, with most genes producing somewhere between 2 and 25. Based on these results, it seems clear that the RNA transcripts are the real carriers of genetic information. This is why some members of the ENCODE team are arguing that an RNA transcript, not a gene, should be considered the fundamental unit of inheritance.
http://networkedblogs.com/BYdo8

Scientists Map All Mammalian Gene Interactions – August 2010
Excerpt: Mammals, including humans, have roughly 20,000 different genes.,,, They found a network of more than 7 million interactions encompassing essentially every one of the genes in the mammalian genome.
http://www.sciencedaily.com/releases/2010/08/100809142044.htm

Insight into cells could lead to new approach to medicines - 2010
Excerpt: Scientists expected to find simple links between individual proteins but were surprised to find that proteins were inter-connected in a complex web. Dr Victor Neduva, of the University of Edinburgh, who took part in the study, said: "Our studies have revealed an intricate network of proteins within cells that is much more complex than we previously thought."
http://www.physorg.com/news196402353.html

The Complexity of Gene Expression, Protein Interaction, and Cell Differentiation - Jill Adams, Ph.D. - 2008
Excerpt: it seems that a single protein can have dozens, if not hundreds, of different interactions,,, In a commentary that accompanied Stumpf's article, Luis Nunes Amaral (2008) wrote, "These numbers provide a sobering view of where we stand in our cataloging of the human interactome. At present, we have identified less than 0.3% of all estimated interactions among human proteins. We are indeed at the dawn of systems biology."
http://www.nature.com/scitable/topicpage/the-complexity-of-gene-expression-protein-interaction-34575
bornagain77
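An editorial aside: the poly-constraint point made with the Sator Square above can be checked mechanically. The sketch below is my own illustration (not from Sanford's book); it builds the square, confirms that its four readings coincide, and then counts the cells where a single-letter substitution leaves that four-way property intact:

```python
SQUARE = ["SATOR", "AREPO", "TENET", "OPERA", "ROTAS"]

def four_readings(grid):
    """The four ways of reading a 5x5 square as one 25-letter string."""
    rows = "".join(grid)                                           # rows, left to right
    cols = "".join("".join(r[j] for r in grid) for j in range(5))  # columns, top down
    return [rows, rows[::-1], cols, cols[::-1]]

def is_four_way(grid):
    readings = four_readings(grid)
    return all(r == readings[0] for r in readings)

assert is_four_way(SQUARE)

# Substitute a foreign letter at every cell in turn; record the cells
# where the four-way reading property survives the "mutation".
survivors = []
for i in range(5):
    for j in range(5):
        mutated = [list(r) for r in SQUARE]
        mutated[i][j] = "X"
        if is_four_way(["".join(r) for r in mutated]):
            survivors.append((i, j))

print(survivors)  # [(2, 2)] -- only the centre cell is unconstrained
```

Twenty-four of the twenty-five cells are tied to other cells by the square's symmetry, so a lone substitution there breaks at least one of the four readings; only the centre stands alone, matching the "save for the center" remark above.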
What mysterious barrier do IDers think prevents microevolutionary change from accumulating until it becomes macroevolution?
The barrier is the lack of a viable mechanism, obviously. Why can I jump 3 feet in the air but not 300? It's not rocket science. Neo-Darwinian processes have been demonstrated to be inadequate function producers even under the most favorable conditions. Simply holding to the superstitious belief that "Time & Chance make all things possible" seems a questionable scientific practice. And the word "microevolution" is very unfortunate, as it implies macroevolution is possible and has actually occurred. That's probably the source of a great deal of confusion in the evolutionist camps. lifepsy
AF: As for an alleged fallacy of reification, the issue is that configuration spaces are cut down from phase or state spaces [by eliminating momentum], phase spaces being a pattern of modelling that has been used in Physics since Gibbs et al in C19, in the context of grounding statistical thermodynamics (and, yes, this is the thermodynamics we are supposedly ignorant of . . . ). Abstract collections of configurations have long been a part of relevant analysis, and if you were serious, you could start with the space of possible arrangements in a string of 899 ASCII characters, which is where I began. KF PS: Let me add from Wiki:
In mathematics and physics, a phase space is a space in which all possible states of a system are represented, with each possible state of the system corresponding to one unique point in the phase space. For mechanical systems, the phase space usually consists of all possible values of position and momentum variables (i.e. the cotangent space of configuration space). The concept of phase space was developed in the late 19th century by Ludwig Boltzmann, Henri Poincaré, and Willard Gibbs.[1] . . . . In a phase space, every degree of freedom or parameter of the system is represented as an axis of a multidimensional space; a one-dimensional system is called a phase line, while a two-dimensional system is called a phase plane. For every possible state of the system, or allowed combination of values of the system's parameters, a point is plotted in the multidimensional space. [--> I tend to think in terms of an n-member vector for n degrees of freedom, e.g. the 899-member ASCII code string has 899 elements, each with 128 possible states; this can be extended WLOG to general systems that can be represented by a nodes-and-arcs wiring diagram, as such can be reduced to strings, as is done with AutoCAD etc. The "spatial" picture emerges once we insert the concept of [extended] Hamming distance between points in the n-dimensional space, rooted in digit-wise differences in value . . . ] Often this succession of plotted points is analogous to the system's state evolving over time. In the end, the phase diagram represents all that the system can be, and its shape can easily elucidate qualities of the system that might not be obvious otherwise. A phase space may contain a great many dimensions. For instance, a gas containing many molecules may require a separate dimension for each particle's x, y and z positions and momenta as well as any number of other properties.
kairosfocus
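An editorial aside: the arithmetic behind the 899-character configuration space, and the extended Hamming distance mentioned in the bracketed note, can both be sketched in a few lines. This is my own illustration; the helper name `hamming` is assumed:

```python
import math

# Size of the configuration space for a string of 899 ASCII characters,
# each cell taking one of 128 possible states.
n_configs = 128 ** 899
print(len(str(n_configs)))    # 1895 digits, i.e. ~2.41 * 10^1894
print(899 * math.log10(128))  # exponent: ~1894.4, far beyond the 10^150 bound

def hamming(a, b):
    """Extended Hamming distance: digit-wise differences between
    two equal-length strings over any alphabet."""
    if len(a) != len(b):
        raise ValueError("strings must have equal length")
    return sum(x != y for x, y in zip(a, b))

# Each single-character "mutation" at a fresh position moves a string
# one more step away from the original in this distance.
print(hamming("SATOR AREPO", "SATOR AREPO"))  # 0
print(hamming("SATOR AREPO", "SAT0R AREP0"))  # 2
```

This is the same 128^899 ~ 2.41 * 10^1894 figure used in the post above; the distance function gives the "spatial" picture of a random walk through the configuration space.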
And you wonder why people don't bother to read what you write! Islands of function; reification! ________ Ad hominem, not a substantial response. And, FYI, FSCO/I has been adequately described, quantified, exemplified, and empirically tested as a reliable sign of design any number of times; it is just that there seems to be a pretence on your part that by ignoring, strawmannising and making dismissive talking points, you can make reality conform to your ideology. I guess that speaks to where evolutionary materialism-rooted radical relativism leads, and it is not pretty. KF Alan Fox
