Uncommon Descent Serving The Intelligent Design Community

Who really understands what an island of function is or is not?

Categories
ID Foundations
specified complexity

Earlier today, I decided to check back at TSZ, to see if they have recovered from the recent regrettable hack attack. They are back up, at least in part. The following, however, caught my eye:

Intelligent design proponents make a negative argument for design.  According to them, the complexity and diversity of life cannot be accounted for by unguided evolution (henceforth referred to simply as ‘evolution’) or any other mindless natural process.  If it can’t be accounted for by evolution, they say, then we must invoke design . . . .

What mysterious barrier do IDers think prevents microevolutionary change from accumulating until it becomes macroevolution?  It’s the deep blue sea, metaphorically speaking.  IDers contend that life occupies ‘islands of function’ separated by seas too broad to be bridged by evolution.

In this post (part 2a) I’ll explain the ‘islands of function’ metaphor and invite commenters to point out its strengths and weaknesses.  In part 2b I’ll explain why the ID interpretation of the metaphor is wrong, and why evolution is not stuck on ‘islands of function’.

This is quite wrong-headed, and it goes a long way toward explaining why there is so little progress in these exchanges:

1 –> The design inference is a positive inference on well tested, inductively established sign, not a negative inference. For instance, the functionally specific, complex information [FSCO/I] — notice the blend of complexity with specificity to achieve function — in the above clip is diagnostic of design as its most credible source, something easily verified empirically on a base of literally billions of cases. (And there are no credible known exceptions, or they would have been trumpeted to the highest heavens all over the Web and in the literature.)

2 –> The similar inductive status of the island of function effect can also easily be shown from this text. There are a great many ways in which the 899 ASCII characters used in the above clip can be arranged: 128^899 ~ 2.41 × 10^1894. (The number of Planck-time states of the 10^80 or so atoms of our observed cosmos since its credible beginning is less than 10^150, a very large number, but one that is utterly dwarfed by the set of possibilities for 899 ASCII characters.) Very few of them would convey the above message in recognisable English, and while some noise — such as typos etc. — can be tolerated, all too soon the injection of random noise — a random walk away from the functional configuration — would destroy function.
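The arithmetic behind these figures is easy to check; here is a short Python sketch, using the 899-character count from the clip and the ~10^150 Planck-time-states bound cited above:

```python
import math

CHARS = 899      # ASCII characters in the clipped text
SYMBOLS = 128    # distinct ASCII symbols per position

# log10 of the configuration count 128^899
exponent = CHARS * math.log10(SYMBOLS)
print(f"128^{CHARS} ~ 10^{exponent:.0f}")    # ~ 10^1894

# Margin over the ~10^150 Planck-time states of the observed cosmos
print(f"excess: ~ 10^{exponent - 150:.0f}")  # ~ 10^1744
```

The configuration space exceeds the cited state count by a factor of roughly 10^1744, which is the point being made.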

3 –> This is a simple illustration of a commonplace fact of life for complex, functionally specific entities made up from multiple, well-matched components that must be properly arranged and coupled together to achieve function. Taking our solar system as a zone of interest, the relevant components can be scattered in a great many ways indeed, none of which will be functional. Even if clumped, a much smaller but still huge number of arrangements exists, the overwhelming majority of which will have no function.

4 –> Only in certain very special clusters of configurations (reflecting the amount of tolerance for configurations in a given neighbourhood) will there be functional configurations. So, we are at the issue that Dembski outlined long ago now, in No Free Lunch:

p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology.

I submit that what they have in mind is specified complexity [[cf. here below], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . .

Biological specification always refers to function . . . In virtue of their function [[a living organism’s subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .”

p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”
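The correspondence Dembski states between the 1-in-10^150 probability bound and 500 bits can be quickly verified:

```python
import math

# 500 bits of information correspond to a space of 2^500 configurations;
# its log10 should land at (just past) the 10^150 probability bound.
BITS = 500
exponent = BITS * math.log10(2)
print(f"2^{BITS} ~ 10^{exponent:.1f}")  # ~ 10^150.5
```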

5 –> This sort of functional specificity brings out how the sort of functional cluster in view is informational, i.e. there is a specific pattern, a set of nodes and arcs that has to be arranged in a form that allows function, within a fairly narrow range of tolerance. That range of neighbouring functional configs defines an island of function. Also, without loss of generality, since a nodes-and-arcs pattern can be reduced to a structured string [this is how AutoCAD and the like work], the pattern can be translated into string structures, with as many degrees of freedom as there are relevant bits.
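As a toy illustration of this reduction (the part names, the arc list and the 2-bit field width are all hypothetical, not drawn from any real CAD format), a nodes-and-arcs wiring diagram can be serialised into one structured bit string, one fixed-width field per node index:

```python
# Toy wiring diagram: four parts (nodes) and the couplings (arcs)
# required for function. Names and field width are illustrative only.
parts = ["rotor", "stator", "shaft", "bushing"]
arcs = [(0, 1), (1, 2), (2, 3)]   # which part must couple to which

NODE_BITS = 2                     # 2 bits suffice to index 4 parts
bit_string = "".join(
    format(a, f"0{NODE_BITS}b") + format(b, f"0{NODE_BITS}b")
    for a, b in arcs
)
print(bit_string)       # one structured string encoding the diagram
print(len(bit_string))  # 12 bits, i.e. 2^12 possible strings this long
```

Each bit is one degree of freedom, so the count of possible strings of that length measures the configuration space the diagram sits in.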

6 –> Nor is this sort of remark exactly news. On Dec 30, 2011, I noted here at UD as follows (something that was actually adverted to in the TSZ thread, but was not taken seriously by objectors to design . . . ):

1 –> Complex, multi-part function depends on having several well-matched, correctly aligned and “wired together” parts that work together to carry out an overall task, i.e. we see apparently purposeful matching and organisation of multiple parts into a whole that carries out what seems to be a goal. The Junkers Jumo 004 Jet engine in the above image is a relevant case in point.

2 –> Ever since Wicken posed the following clip in 1979, this issue of wiring-diagram based complex functional organisation has been on the table as a characteristic feature of life forms that must be properly explained by any successful theory of the causal roots of life. Clip:

‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems.  Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]

3 –> The question at stake in the thread excerpted from above, is whether there can be an effective, incremental culling-out based on competition for niches and thence reproductive success of sub-populations that will create ever more complex systems that will then appear to have been designed.

4 –> Of course, we must notice that the implication of this claim is that we are dealing with in effect a vast continent of possible functional forms that can be spanned by a gradually branching tree. That’s a big claim, and it needs to be warranted on observational evidence, or it becomes little more than wishful thinking and grand extrapolation in service to an a priori evolutionary materialistic scheme of thought.

5 –> In cases where the function in question has an irreducible core of necessary parts, it is often suggested that something that may have had another purpose may simply find itself duplicated or fall out of use, then fit in with a new use. “Simple.”

6 –> NOT. For, such a proposal faces a cluster of challenges highlighted earlier in this UD series as posed by Angus Menuge [oops!] for the case of the flagellum:

For a working [bacterial] flagellum to be built by exaptation, the five following conditions would all have to be met:

C1: Availability. Among the parts available for recruitment to form the flagellum, there would need to be ones capable of performing the highly specialized tasks of paddle, rotor, and motor, even though all of these items serve some other function or no function.

C2: Synchronization. The availability of these parts would have to be synchronized so that at some point, either individually or in combination, they are all available at the same time.

C3: Localization. The selected parts must all be made available at the same ‘construction site,’ perhaps not simultaneously but certainly at the time they are needed.

C4: Coordination. The parts must be coordinated in just the right way: even if all of the parts of a flagellum are available at the right time, it is clear that the majority of ways of assembling them will be non-functional or irrelevant.

C5: Interface compatibility. The parts must be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’: even if a paddle, rotor, and motor are put together in the right order, they also need to interface correctly.

( Agents Under Fire: Materialism and the Rationality of Science, pgs. 104-105 (Rowman & Littlefield, 2004). HT: ENV.)

8 –> The number of biologically relevant cases where C1 – C5 have all been observed to be met: ZERO.

9 –> What is coming out ever more clearly is this:

when a set of matching components must be arranged so they can work together to carry out a task or function, this strongly constrains both the choice of individual parts and how they must be arranged to fit together

A jigsaw puzzle is a good case in point.

So is a car engine — as anyone who has had to hunt down a specific, hard to find part will know.

So are the statements in a computer program — famously, NASA’s Mariner 1 probe veered off course on launch in 1962 and had to be destroyed by range safety, a failure traced to a tiny transcription error (often described as a missing hyphen or overbar) in its guidance equations.

The letters and words in this paragraph are like that too.

That’s why (at first, simple level) we can usually quite easily tell the difference between:

A: An orderly, periodic, meaninglessly repetitive sequence: FFFFFFFFFF . . .

B: Aperiodic, evidently random, equally meaningless text: y8ivgdfdihgdftrs . . .

C: Aperiodic, but recognisably meaningfully organised sequences of characters: such as this sequence of letters . . .
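The first of these distinctions — orderly repetition versus aperiodicity — can be probed mechanically. A crude, hedged sketch using compressed size as a proxy (the sample strings are illustrative only); note that it does NOT separate random B from meaningful C, which is precisely the point about specificity needing a functional context:

```python
import zlib

# Compressed size separates orderly repetition (A) from aperiodic
# strings (B, C). Recognising *function* in C is the observer's job;
# no compressor distinguishes B from C on its own.
samples = {
    "A (ordered)":    b"F" * 32,
    "B (random)":     b"y8ivgdfdihgdftrsq3kx0pzmwnc1vbjq",
    "C (functional)": b"aperiodic but meaningful sequence",
}
for label, text in samples.items():
    print(label, "compresses to", len(zlib.compress(text)), "bytes")
```

The ordered string collapses to a handful of bytes, while both aperiodic strings stay near their original length.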

In short, to be meaningful or functional, a correct set of core components has to match and must be properly arranged, and while there may be some room to vary, it is not true that just any part, inserted in just any way, will fit in.

As a direct result, in our general experience, and observation, if the functional result is complex enough, the most likely cause is intelligent choice, or design.  

This has a consequence. For, this need for choosing and correctly arranging then hooking up correct, matching parts in a specific pattern implicitly rules out the vast majority of possibilities and leads to the concept of islands of function in a vast sea of possible but meaningless and/or non-functional configurations.

10 –> Consequently, the normal expectation is that complex, multi-part functionality will come in isolated islands. So also, those who wish to assert an “exception” for biological functions like the avian flow-through lung, will need to  empirically warrant their claims. Show us, in short.

11 –> And, to do so will require addressing the difficulty posed by Gould in his last book, in 2002:

. . . long term stasis following geologically abrupt origin of most fossil morphospecies, has always been recognized by professional paleontologists. [The Structure of Evolutionary Theory (2002), p. 752.]

. . . .  The great majority of species do not show any appreciable evolutionary change at all. These species appear in the section [[first occurrence] without obvious ancestors in the underlying beds, are stable once established and disappear higher up without leaving any descendants.” [p. 753.]

. . . . proclamations for the supposed ‘truth’ of gradualism – asserted against every working paleontologist’s knowledge of its rarity – emerged largely from such a restriction of attention to exceedingly rare cases under the false belief that they alone provided a record of evolution at all! The falsification of most ‘textbook classics’ upon restudy only accentuates the fallacy of the ‘case study’ method and its root in prior expectation rather than objective reading of the fossil record. [[p. 773.]

12 –> In that context, the point raised by GP above, that

. . .  once a gene is duplicated and inactivated, it becomes non visible to NS. So, intelligent causes can very well act on it without any problem, while pure randomness, mutations and drift, will be free to operate in neutral form, but will still have the whole wall of probabilistic barriers against them.

. . . takes on multiplied force.

___________

In short, the islands of function issue — rhetorical brush-asides notwithstanding — is real, and it counts.  Let us see how the evolutionary materialism advocates will answer to it.

7 –> So, what is the grand overturn that shows that this is all nonsense? The concept of rising fitness functions that allow incremental change:

Now suppose that it rains for 40 days and 40 nights. The rain fills up our landscape, forming a vast sea.  Only the mountain tops remain above the water as islands – the ‘islands of function’ that IDers are so fond of.

Our populations occupy the islands.  Sea level indicates the minimum fitness at which mutants remain viable. Small changes will create viable descendants at different spots on the island, though the population as a whole will gravitate toward the high spots. Larger changes will put the mutants underwater, where they will die out.

The idea, according to ID proponents, is that populations remain stranded on these islands of function.  Some amount of microevolutionary change is possible, but only if it leaves you high and dry on the same island.  Macroevolution is not possible, because that would require leaping from island to island, and evolution is incapable of such grand leaps.  You’ll end up in the water.

There is some truth to the ‘islands of function’ metaphor, but it also has some glaring shortcomings that ID proponents almost always overlook.  I will mention some of the strengths and  shortcomings in the comments, and I know that my fellow commenters will point out others.

8 –> To which the obvious answer is that the requisites of complex, specific, integrated function define islands which are isolated by seas of non-function that need to be bridged, not just on paper but observationally AND WITHIN ACCESSIBLE SEARCH RESOURCES (where the atomic resources of our solar system make the BLIND SEARCH creation of 500 bits of novel FSCO/I maximally implausible, and those of the observed cosmos max out at 1,000 bits).

9 –> In particular, the warrant for bridging islands of function requires that such a claim be justified observationally. Starting with the origin of the very first body plan, and continuing with the origin of further body plans, credibly requiring 100 k – 1,000 k bits of genetic information in the first case (in a string data structure), and of order 10 – 100 million bits in onward cases for multicellular body plans.

10 –> Hardly less fatal is something implied in what I just outlined. We are not dealing with known, close-by islands that were mountain-tops flooded out, but with an unknown and patently vast seascape that may or may not contain islands of function, so far as blind chance and mechanical necessity are concerned, these being the only means of search permissible under the relevant conditions. And this, where there are strictly limited search resources that max out at 500 – 1,000 bits under very generous conditions.

11 –> So, what is really needed is to start with a warm little pond or the like scenario, get a suitable concentration of monomers, and then arrive at a living cell, per reasonable observation: one that has encapsulation, with gating of materials flows, and carries out self-replication and metabolism using a genetic code mechanism and the like. Then, we need to see observational warrant for going from that to novel body plans with specialised, properly organised cell types, tissues, organs, systems etc., constituting a new organism. This simply has not been done, nor is such in prospect.

12 –> Absent that, what we have is gross extrapolation of micro changes in already existing body plans, substituted for what was really needed. That is, we need to explain crossing the sea of non-function by blind chance mechanisms that would put us on shorelines of function, not the incremental hill climbing that can happen by all accounts once we are on such a shoreline. And, this must start, logically, with the very first body plan.

13 –> At the same time, we are surrounded by a world of technology that tells us that intelligent designers exist and are fully capable of creating FSCO/I rich systems. And we have whole disciplines and professions that study and practice the design of FSCO/I rich systems.

_________

So, which alternative explanation is more reasonable on the actual evidence, why? END

Comments
JWT, as in at 1 and 12 above, yup. [I just added, 5 above.] I have also pointed out just above that by his evident lack of knowledge base, AF is not in a position to make the confident-manner dismissals of FSCO/I he has been making. And BTW, every post he has ever made at UD is a successful test of same as produced, reliably, by design.

On testing cases of FSCI in biology, he should note that there is now a practice of genetic engineering, and that text has been written into DNA, so intelligently designed genetic information is a known fact. Beyond that, at the origin of life and of organism types, we were not there, so we are forced to infer on best explanation. AF knows or should know that there is a world of technology surrounding us [in addition to the cases he himself produces in this and other threads] that allows us to be highly confident that FSCO/I is a reliable sign of design; that being the only credible causal factor with demonstrated and reliably tested capacity to produce FSCO/I.

This is backed up by various exercises that so far have managed to get 20 – 24 ASCII characters' worth of info; where that is ~ 1 in 10^50 of a config space, we need a mechanism capable, within the resources of our cosmos or solar system, of working effectively in spaces 10^100 beyond that, i.e. 1 in 10^150. (That is the 500-bit end of the 500 – 1,000 bit scale config spaces we are talking about.)

And his demand for, in effect, a time machine to visit the remote past as a criterion of accepting the strength of the sign would, if consistently applied, wipe out all origins sciences; i.e. this is selective hyperskepticism. I have suggested as a start -- after 8 years of observing the ID debates by his own admission -- that he look at my 101 on basic informatics. KF
kairosfocus
March 17, 2013 at 04:15 AM PST
AF: I notice, again, want of substance. Earlier, you did not seem to know what a configuration space is [and the broader phase/state space], and the context of such in the history of both mathematics and physics, with particular reference to statistical thermodynamics. If you do not know these, you are unlikely to know of the informational school of thought on statistical thermodynamics, and the force of the point that the entropy measures the want of information on the specific microstate held by the components of a system, of which what we know is those gross aggregates, the macroscopic state-determining variables. As a result, you are in no position to properly directly evaluate the link from classical to statistical to informational thermodynamics, and the linked issues on information raised by Elzinga.

However, you are in a position to understand the title of the paper by J S Wicken in 1979, and therefore to understand that it is simply and blatantly false that seeing a bridge from thermodynamics to information issues, in a context of the molecular machines and organisation of life forms, is a "proof" of being a creationist or the like. (Orgel, Yockey and others have pursued similar issues.)

Onward, it is fair to observe that your dismissiveness toward the FSCO/I metric is utterly unwarranted. You need to acquaint yourself with the bit as a direct unit of information, and its root in the discussion of strings of symbols and possible vs observed states, with linked considerations on probability distribution of symbols in messages. (This, from my always linked note, may be a useful 101; certainly, it is linked to what I used to teach my students on this subject, with some fair degree of success.)

Lastly, I do not control UD's moderation and control policies, so I cannot say much other than that whoever Thaumaturge was, he had little or nothing of substance to contribute here. Had he been serious about substance, he could very easily have engaged this thread across yesterday, as he can still do from his preferred zone. (Not that the track record says anything much, from what has been going on in the thread at TSZ all along. And, if he has indeed been banned, I am sure Mr Arrington can speak for himself as to why. I did note a warning that T was making no redeeming contribution to the blog by his ipse dixit sniping remarks.) KF
kairosfocus
March 17, 2013 at 04:11 AM PST
@AF:

that the author of the “Skeptical Zone” thread has just been banned by Barry

That's a good thing, though. Now he can claim street cred for being a victim of the ID "inquisition". Hey, AF, kindly look at all the posts you have written here. Replies may be edited INTO your postings by the commander-in-chief of this thread.
JWTruthInLove
March 17, 2013 at 02:55 AM PST
Meant to say that I am referring to Keith S, who was posting under the pseudonym "thaumaturge".
Alan Fox
March 17, 2013 at 01:32 AM PST
Interesting that the author of the "Skeptical Zone" thread has just been banned by Barry, or he might have been able to take you up on some of your assertions. You are of course, as is anyone else here, welcome to engage in the TSZ thread directly, rather than relying on the "B" team for a dilatory bit of finger poking.
Alan Fox
March 17, 2013 at 01:24 AM PST
PPS: It seems that there is an almost endless pile of deep-rooted, stubbornly clung-to misconceptions to be corrected.

As for the "ID is a default" talking point, it should be noted -- as was pointed out ever so many times to EL and others but has been consistently ignored or evaded without good reason -- that there are two defaults in succession in the per-aspect design inference explanatory filter: (i) mechanical necessity giving rise to natural regularity of low contingency, and, in the case of high contingency, (ii) chance leading to stochastically distributed outcomes. The inference to design is precisely made in light of observing that, from Plato's day to now, we have consistently observed three main causal factors: necessity, chance and the ART-ificial, aka design. So, it is inductively well warranted to explain on such.

What design theory then seeks is to assess the characteristic features of the three and to assign causal explanations appropriately, asking and seeking to answer whether there is in our world a pattern that reliably indicates design as cause. To which the answer -- backed up by literally billions of cases -- is yes: Wicken's functional-organisation wiring diagram, in a context where there is sufficient complexity joined to specificity that makes chance not a plausible explanation of the observed entity.

AF wishes to dodge that overwhelming body of evidence, and to suggest that if we do not have good reason to infer to necessity or chance, the only reasonable conclusion is that we do not have any good explanation of the likely cause. In short, he has no good alternative, but does not -- obviously for ideological reasons -- want to face the direct weight of evidence that FSCO/I is a strong sign of design. And this goes to the point that he wishes to pretend that FSCO/I -- which starts with the functional organisation of the texts he posts here -- is a figment of our over-active, God-of-the-gaps imaginations.

When challenged with the measured value of FSCO/I in his own posts, he simply refuses to read. And, when he is informed that design theory, from the outset of the modern thinking, has consistently highlighted that from the world of life we may infer to design but have no good basis as a scientific inference to infer to a specific designer, his ilk are quick to suggest hidden-agenda Creationist conspiracism. Sadly revealing.
kairosfocus
March 16, 2013 at 11:41 PM PST
PS: On abusive God-of-the-gaps dismissals such as was resorted to by AF above, cf. 39 in the UD WACs (NB: BA et al, the link has become corrupt):
39] ID is Nothing More Than a “God of the Gaps” Hypothesis

Famously, when his calculations did not quite work, Newton proposed that God or angels nudged the orbiting planets every now and then to get them back into proper alignment. Later scientists were able to show that the perturbations of one planet acting on another are calculable and do not in aggregate skew the calculations. Newton’s error is an example of the “God of the gaps” fallacy – if we do not understand it, God must have done it.

ID is not proposing “God” to paper over a gap in current scientific explanation. Instead ID theorists start from empirically observed, reliable, known facts and generally accepted principles of scientific reasoning:

(a) Intelligent designers exist and act in the world.

(b) When they do so, as a rule, they leave reliable signs of such intelligent action behind.

(c) Indeed, for many of the signs in question such as CSI and IC, intelligent agents are the only observed cause of such effects, and chance + necessity (the alternative) is not a plausible source, because the islands of function are far too sparse in the space of possible relevant configurations.

(d) On the general principle of science, that “like causes like,” we are therefore entitled to infer from sign to the signified: intelligent action.

(e) This conclusion is, of course, subject to falsification if it can be shown that undirected chance + mechanical forces do give rise to CSI or IC.

Thus, ID is falsifiable in principle but well supported in fact. In sum, ID is indeed a legitimate scientific endeavor: the science that studies signs of intelligence.
kairosfocus
March 16, 2013 at 11:27 PM PST
F/N: Elzinga adds to the "Ipse Dixit"-ism mix over at TSZ, Dec 20th, in the same thread:
ID/creationists believe that evolution violates the second law of thermodynamics. It is the fundamental misconception that underlies all ID/creationist denial and all ID/creationist “theory.” There is not one ID/creationist that can pass a basic concept test on entropy and the second law [--> Notice the strawman tactic stereotyping and guilt by association tactic, where Design thought and Creationism are equated with all sorts of onward conspiracy mongering being alluded to; cf UD WAC here on in reply . . . ]. And it is this fundamental misconception that genetically links the ID crowd directly to the “scientific” creationist crowd. It is a clear genetic marker that they just can’t hide. [--> Actually, as will be shown below, this just shows that ME has not done basic homework, as in kindly tell us the title of the paper of 1979 by that notorious Creation Scientist -- NOT -- J S Wicken in which he stated, "organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. [--> this is the source of the descriptive abbreviations, FSCI and FSCO/I . . . . ] It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’"; cf. below.] It is a marker that is even more robust that cdesign proponentsists.
This aptly further illustrates the pattern of projection, ad hominem-laced dismissive declarations, and failure to address serious substantial concerns on the merits. As a first note, the allusion to the Discovery Institute's late-90's promotional argument by Johnson, on sound scientific research multiplied by addressing the sociocultural agendas of radical evolutionary materialism dressed up in the lab coat, is illustrative of how a tendentious and materially misleading, irresponsible fear-mongering talking-point narrative is used by objectors to design theory to poison the atmosphere and cloud issues, through false and/or misleading assertions endlessly repeated drumbeat-style as though such urban legends were well-substantiated fact. This tells us a lot about the level of thinking we are dealing with.

However, there is a technical issue on the table, thermodynamics:

1 --> Evidently, Elzinga has first of all not troubled to seriously read the very first technical design theory book, TMLO by Thaxton (a PhD Chemist . . . ) et al, chs 7 and 8. Let me clip the transitional comment at the end of Ch 7, after a discussion that largely focussed on classical thermodynamics and Gibbs free energy in particular:
While the maintenance of living systems is easily rationalized in terms of thermodynamics, the origin of such living systems is quite another matter. Though the earth is open to energy flow from the sun, the means of converting this energy into the necessary work to build up living systems from simple precursors remains at present unspecified (see equation 7-17). The "evolution" from biomonomers of to fully functioning cells is the issue. Can one make the incredible jump in energy and organization from raw material and raw energy, apart from some means of directing the energy flow through the system? In Chapters 8 and 9 we will consider this question, limiting our discussion to two small but crucial steps in the proposed evolutionary scheme namely, the formation of protein and DNA from their precursors. It is widely agreed that both protein and DNA are essential for living systems and indispensable components of every living cell today.11 Yet they are only produced by living cells. Both types of molecules are much more energy and information rich than the biomonomers from which they form. Can one reasonably predict their occurrence given the necessary biomonomers and an energy source? Has this been verified experimentally? These questions will be considered . . .
2 --> This comes after a responsible treatment of the thermodynamics involved, and in effect highlights the pivotal issue, which I took up first in my own discussion of thermodynamics considerations in Appendix 1 of my always linked note (click on my handle in the LH column . . . this has been present for every post I have ever made as a comment at UD).

3 --> Which effectively starts from a 101 on thermodynamics, including deducing what "raw energy" implies and also the effect of such an injection in Clausius' example used to deduce the second law. Namely, as the subtraction on transfer of d'Q of heat from a body at T_hot to another at T_cold shows, the RISE in entropy of the importing body (using the ratio d'Q/T) will overwhelm the loss from the exporting one, leading to an overall increase; i.e. right from the beginning, it was well understood that importation of raw energy tends to increase entropy.

4 --> Further to this, in Section A of the same discussion, I explained what information is, and identified functionally specific, complex information [FSCI] as a pivotal concept in design thought. In so doing, I pointed out the rise and increasing acceptance of the informational perspective on thermodynamics and particularly entropy, using Harry Robertson's Statistical Thermophysics as a pivot, and noted the following understanding of what entropy means, from Wiki's article on Entropy and Information (I clip the current rendering):
At an everyday practical level the links between information entropy and thermodynamic entropy are not evident. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the minuteness of Boltzmann's constant kB indicates, the changes in S/kB for even tiny amounts of substances in chemical and physical processes represent amounts of entropy which are extremely large compared to anything seen in data compression or signal processing. Furthermore, in classical thermodynamics the entropy is defined in terms of macroscopic measurements and makes no reference to any probability distribution, which is central to the definition of information entropy. At a multidisciplinary level, however, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamic entropy, as explained by statistical mechanics, should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being proportional to the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics, with the constant of proportionality being just the Boltzmann constant. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states for the system, thus making any complete state description longer. (See article: maximum entropy thermodynamics). 
Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total thermodynamic entropy does not decrease (which resolves the paradox). Landauer's principle has implications on the amount of heat a computer must dissipate to process a given amount of information, though modern computers are nowhere near the efficiency limit.
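As an aside, the entropy bookkeeping in Clausius' two-body example (point 3 above) is easy to check numerically. A minimal sketch follows; the function name and the 100 J / 500 K / 300 K figures are illustrative assumptions, not values from the original discussion:

```python
# Clausius' two-body example: a quantity of heat d'Q flows from a hot
# body at T_hot to a cold body at T_cold. For a small transfer each
# body's entropy change is approximated by dS = d'Q / T.

def entropy_change(dQ_joules, T_hot_K, T_cold_K):
    """Return (dS_hot, dS_cold, dS_total) for heat flowing hot -> cold."""
    dS_hot = -dQ_joules / T_hot_K    # exporting body loses entropy
    dS_cold = dQ_joules / T_cold_K   # importing body gains more, since T_cold < T_hot
    return dS_hot, dS_cold, dS_hot + dS_cold

# Illustrative numbers: 100 J moving from 500 K to 300 K.
dS_hot, dS_cold, dS_total = entropy_change(100.0, 500.0, 300.0)
print(dS_hot, dS_cold, dS_total)  # -0.2, ~0.333, ~+0.133 J/K: a net rise
```

The gain at the colder body always outweighs the loss at the hotter one, so the total entropy rises, which is the point being made about importing raw energy.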
5 --> As my Appendix 1 discusses, ever since Brillouin in the '50s - '60s, that link has been spotted and discussed at some level. Indeed, that famous physicist introduced the idea that information is negentropy. The informational view of thermodynamics traces to Jaynes, of the same general era.

6 --> We can now pick up the concept of work, viewed as ordered application of force that displaces the object at its point of application along its line of action, quantified on the increment F*dx.

7 --> Clearly, a pattern of such ordered displacements can create complex, functional organisation, whether at macro or micro levels. Thence, my discussion in the same Appendix 1 of a vat full of micro-jet parts small enough to take part in Brownian motion, and the challenge of countering diffusion through blind chance and mechanical necessity vs. applying, as a thought exercise, an army of intelligently directed nano-machines that force the assembly of the jet based on the designed pattern of nodes and arcs.

8 --> It is easy to see that for an object complex enough that the nodes-and-arcs Wicken wiring diagram pattern [cf. his 1979 remark on that topic, which is the actual root of the descriptive terms and initials FSCI and FSCO/I . . . ] needed to create a flyable jet involves at least 500 - 1,000 structured yes/no questions [= 500 - 1,000 bits], the atomic resources of our solar system or of the observed cosmos as a whole will be blatantly inadequate for us to expect that such a structure would spontaneously emerge through blind forces. But FSCO/I-rich entities are routinely created by planned, deliberate work. In short, counterflow leading to FSCO/I is a strong sign of deliberate work according to an organising plan.

9 --> In the UD ID Foundations series, this general topic came up at no. 2 in the main series, Jan 23, 2011, here on. Notice this from Dembski, on the point:
. . .[From commonplace experience and observation, we may see that:] (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.)
10 --> This is in fact, on billions of test cases, the only actually observed source of FSCO/I [take this as meaning complex, information-rich, functional organisation that is dependent on specific arrangement and coupling of parts to achieve function in some relevant way, e.g. a car engine or even a string of ASCII symbols expressing a blog post in English], and it is reasonable then to infer, on those billions of test cases and the analysis outlined above, that such FSCO/I is a sign of design.

11 --> That is the context in which I argued in that post:
As fig. A shows, open systems can indeed readily — but, alas, temporarily — increase local organisation by importing energy from a “source” and doing the right kind of work. But, generally only in a context of guiding information based on an intent or program, or its own functional organisation, and at the expense of exhausting compensating disorder to some “sink” or other. (NB: here, something like a timing belt and set of cams is a program.)

4 –> Heat — in short: energy moving between bodies due to temperature difference, by radiation, convection or conduction — cannot wholly be converted to work. (Here, the radiant energy flowing out from our sun’s surface at some 6,000 degrees Celsius to earth at some 15 degrees Celsius, average, is a form of heat.)

5 –> Physically, by definition, work is done when applied forces impart motion along their lines of action to their points of application; e.g. when we lift a heavy box to put it on a shelf, we do work. For force F, and distance along line of motion dx, the work is: dW = F*dx, . . . where, strictly, * denotes a “dot product.”

6 –> But, that definition does not say anything about whether or not the work is constructive — a tornado ripping off a roof and flying its parts for a mile to land elsewhere has done physical work, but not constructive work. (Side-bar: constructive work is closely connected to the sort we get paid for: if your work is constructive, desirable and affordable, you get paid for it. [Hence, the connexion between energy use at a given general level of technology and the level of economic activity and national income.])

7 –> Similarly, it says nothing about the origin of the energy conversion device.

8 –> When that device itself manifests functionally specific, complex organisation and associated information — FSCO/I (e.g. a gas engine-generator set or a solar PV panel, battery and wind turbine set, as opposed to, e.g. the natural law-dominated order exhibited by tornadoes or hurricanes as vortexes), we have good reason to infer that the conversion device was designed. (Side-bar: Now, there is arguably a link between increased information and reduction in degrees of microscopic freedom of distributing energy and mass. Where, entropy is best understood as a logarithmic measure of the number of ways energy and mass can be distributed under a given set of macro-level constraints like pressure, temperature, magnetic field, etc.: s = k ln w, k being Boltzmann’s constant and w the number of “ways.” Jaynes therefore observed, aptly [but somewhat controversially]: “The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its [macro-level observable] thermodynamic state. This is a perfectly ‘objective’ quantity . . . There is no reason why it cannot be measured in the laboratory.” [Cited, Harry Robertson, Statistical Thermophysics, Prentice Hall, 1993, p. 36.] This connects fairly directly to the information-as-negentropy concept of Brillouin and Szilard, but that is not our focus here, which is instead on the credible source/cause of energy conversion devices exhibiting FSCO/I. As this thought experiment shows [cf. TMLO chs. 8 & 9], the correct assembly of such from microscopic components scattered at random in a vat or a pond would indeed drastically reduce entropy and increase the functionality [which would define an observable functional state], but the basic message is that since the scattered microstates so overwhelm the clumped, and then the functional, ones, it is maximally unlikely that such would ever happen spontaneously. Nor would heating up the pond or striking it with lightning or the like be likely to help matters. Just as, we normally observe an ink drop released into a vat diffusing throughout the vat, not collecting back together again. In short, to produce complex, specific organisation to achieve function, the most credible path is to assemble co-ordinated, well-matched parts according to a known good plan.)

9 –> The reasonableness of the inference from observing a high-FSCO/I energy converter to its having been designed would be sharply multiplied when the device in question is part of a von Neumann, self-replicating automaton [vNSR]:
12 --> vNSR? Yes: the living cell is a self-replicating metabolic automaton, and it therefore involves the following components in addition to the general organised and highly complex, specific functions it carries out:
10 –> Here, we see a machine that not only functions in its own behalf but has the ADDITIONAL — that is very important — capacity of self-replication based on stored specifications, which requires:

(i) an underlying storable code to record the required information to create not only (a) the primary functional machine [here, for a "clanking replicator" as illustrated, a Turing-type “universal computer”; in a cell this would be the metabolic entity that transforms environmental materials into required components etc.] but also (b) the self-replicating facility; and, that (c) can express step by step finite procedures for using the facility;

(ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with

(iii) a tape reader that reads and interprets the coded specifications and associated instructions; thus controlling:

(iv) position-arm implementing machines with “tool tips” controlled by the tape reader and used to carry out the action-steps for the specified replication; backed up by

(v) either: (1) a pre-existing reservoir of required parts and energy sources, or (2) associated “metabolic” machines carrying out activities that, as a part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment.

11 –> Also, parts (ii), (iii) and (iv) are each necessary for, and together are jointly sufficient to implement, a self-replicating machine with an integral von Neumann universal constructor.

12 –> That is, we see here an irreducibly complex set of core components that must all be present in a properly organised fashion for a successful self-replicating machine to exist. [Take just one core part out, and self-replicating functionality ceases: the self-replicating machine is irreducibly complex (IC).]
13 –> This irreducible complexity is compounded by the requirement (i) for codes, requiring organised symbols and rules to specify both steps to take and formats for storing information, and (v) for appropriate material resources and energy sources.

14 –> Immediately, we are looking at islands of organised function for both the machinery and the information in the wider sea of possible (but mostly non-functional) configurations.

15 –> In short, outside such functionally specific — thus, isolated — information-rich hot (or, “target”) zones, want of correct components and/or of proper organisation and/or co-ordination will block function from emerging or being sustained across time from generation to generation.

16 –> So, we may conclude: once the set of possible configurations of relevant parts is large enough and the islands of function are credibly sufficiently specific/isolated, it is unreasonable to expect such function to arise from chance, or from chance circumstances driving blind natural forces under the known laws of nature.
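The search-resource arithmetic behind the 500 - 1,000 bit threshold in point 16 can be checked with exact integer arithmetic. A minimal sketch, using the rough order-of-magnitude figures quoted in the thread (~10^80 atoms, ~10^45 Planck-times per second, ~10^25 s of cosmic history); these are assumptions for illustration, not precise physics:

```python
# Compare the number of atomic-scale events available in the observed
# cosmos with the number of configurations specified by 500 and 1,000 bits.

atoms          = 10**80   # rough atom count of the observed cosmos
planck_per_sec = 10**45   # rough Planck-time states per second
seconds        = 10**25   # rough cosmic history in seconds

max_events = atoms * planck_per_sec * seconds  # ~10^150 possible events

configs_500  = 2**500    # configurations for 500 yes/no questions
configs_1000 = 2**1000   # configurations for 1,000 yes/no questions

print(configs_500 > max_events)    # True: 2^500 ~ 3.27e150 already exceeds ~1e150
print(configs_1000 > max_events**2)  # True: 2^1000 dwarfs even the square
```

On these figures, even 500 bits of configuration space outruns the total event count, which is the comparison the post is leaning on.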
13 --> Notice how the vNSR involved in the living cell directly implicates coded, digital information, which -- as the following letter from Crick to his son, dated March 19, 1953, famously shows -- has long been understood to be involved in how DNA functions:
"Now we believe that the DNA is a code. That is, the order of bases (the letters) makes one gene different from another gene (just as one page of print is different from another)"
14 --> This is of course a main reason why OOL is a pivotal context in which design theory has arisen in the context of the world of life, starting from TMLO. (That is not to say that the origin of increments in FSCO/I in making major body plans is not a significant consideration.) But at OOL, which is the ROOT of the Darwinian tree of life -- and no roots, no shoots, branches or twigs, is the obvious issue -- the favourite "out" of appealing to differential reproductive success of sub-populations in ecological niches [i.e. natural selection in one form or another] is off the table. For, the origin of the vNSR required for that is itself what is centrally on the table.

15 --> In this light, let us listen to Wicken:
‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and commonplace initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. [--> Notice the roots of the term FSCO/I] It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. NB: “originally” is added to highlight that for self-replicating systems, the blueprint can be built-in.)]
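One rough way to visualise Wicken's three-way contrast (order vs. organization vs. randomness) is via compressibility, a crude stand-in for algorithmic complexity. To be clear, this is only an analogy sketched for this comment, not Wicken's own method, and compression measures complexity, not function: it separates crystal-like order from the other two, but cannot by itself detect organization. The sample strings below are illustrative choices:

```python
import random
import zlib

random.seed(0)  # deterministic "random" sample for repeatability

# Three ~200-byte samples: crystal-like repetition, functional English
# text, and random noise.
ordered = b"AB" * 100
english = (b"organized systems must be assembled element by element "
           b"according to an external wiring diagram with a high "
           b"information content, and so carry functional complexity "
           b"rather than mere crystallographic order....")[:200]
noise   = bytes(random.randrange(256) for _ in range(200))

sizes = {name: len(zlib.compress(data))
         for name, data in [("ordered", ordered),
                            ("english", english),
                            ("noise", noise)]}
print(sizes)  # ordered compresses far below english; noise barely at all
```

The ordered string collapses to a short description ("repeat AB"), the English text sits in between, and the noise is essentially incompressible, mirroring Wicken's point that order is algorithmically simple while organized, information-bearing text is not.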
16 --> Now, strike out "selection" and ask: what is left on the table as a credible explanation for FSCO/I?

17 --> Also, just what areas of thought are being integrated here by that notorious Creationist -- NOT -- J S Wicken, again? Let us read the title:
The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion
18 --> But, but, but, we thought this linkage was a sure signature of those notorious and dangerous Creationists in action. It seems not. OOPS.

19 --> And, it is clear from the above issues that design sits at the table of scientific explanations as of right, not by grudging sufferance. Starting, here, with the origin of life. And if it is there at the root, there is no good reason to impose a priori materialism disguised as mere methodological constraints and "reasonable" redefinitions of science and its methods ("scientific explanations MUST be naturalistic"): design sits as of right as a credible explanation all the way from microbes to Mozart.

____________

"Ipse Dixit"-ism as a rebuttal to design theory fails, yet again. KF

kairosfocus
March 16, 2013 at 10:59 PM PST
Right Alan, it's all nonsensical. Design, purpose, function, configuration, specification, information, agency, consciousness, complexity, sophistication, and much much more, are all easily dismissed and waved away. You're well within your rights to scoff endlessly about every challenging subject, and consistently refuse to stake out any claims beyond reductive absurdity. Functional isolation? Ridiculous! Design? Illusory! Complexity? Ignorant! Specification? Bafflegab! Evolution by unguided processes? Self-evidently true! Yes, I've practically summed you up in a single comment. :lol: Oh, and let's not forget the self-explanatory floating link: Clown noses

Chance Ratcliff
March 16, 2013 at 04:45 PM PST
lifepsy, as well you should be skeptical,,, it is simply preposterous to think that such unfathomable functional complexity arose from anything other than blind undirected processes. :) Thank goodness Fox and company are here to keep us 'scientific',,,

Nothing To See Here - video (if there is any question, Fox is the one handling crowd control) http://www.youtube.com/watch?v=rSjK2Oqrgic

bornagain77
March 16, 2013 at 04:31 PM PST
...that’s a superb refutation of functional isolation...
What's to refute? It's gobbledegook. And it would still only be a "God of the gaps" argument. Default to "design".

AF: Kindly do your homework next time, before resorting to more dismissive "Ipse Dixit"-ism as a substitute for substantial reply on the merits. KF

Alan Fox
March 16, 2013 at 04:20 PM PST
Oops, Ratcliff

Alan Fox
March 16, 2013 at 04:17 PM PST
Thanks for pointing that out, Radcliffe Reification

Alan Fox
March 16, 2013 at 04:16 PM PST
BA77, I'm not convinced. I mean... besides the reams of rational evidence-based argumentation, and actual peer-reviewed experimental empirical demonstrations of limits to neo-darwinian mechanisms, what "mysterious barrier" is really preventing a fish + 400 million years from turning into a human?

lifepsy
March 16, 2013 at 04:13 PM PST
Alan @1, that's a superb refutation of functional isolation. You should copy and paste that into a YouTube comment, that is if you didn't already copy and paste it from one. But fix your hyperlink first, or your sloppy commenting might be confused for a lousy argument.

Chance Ratcliff
March 16, 2013 at 03:59 PM PST
further notes:
Mammalian overlapping genes: the comparative perspective. - 2004 Excerpt: it is rather surprising that a large number of genes overlap in the mammalian genomes. Thousands of overlapping genes were recently identified in the human and mouse genomes. However, the origin and evolution of overlapping genes are still unknown. We identified 1316 pairs of overlapping genes in humans and mice and studied their evolutionary patterns. It appears that these genes do not demonstrate greater than usual conservation. Studies of the gene structure and overlap pattern showed that only a small fraction of analyzed genes preserved exactly the same pattern in both organisms. http://www.ncbi.nlm.nih.gov/pubmed/14762064

Doubling the information from the Double Helix - April 27, 2012 Excerpt: The study’s findings have shown that two microRNA genes with different functions can be produced from the same piece (sequence) of DNA — one is produced from the top strand and another from the bottom complementary ‘mirror’ strand. Specifically, the research has shown that a single piece of human DNA gives rise to two fully processed microRNA genes that are expressed in the brain and have different and previously unknown functions. One microRNA is expressed in the parts of nerve cells that are known to control memory function and the other microRNA controls the processes that move protein cargos around nerve cells.,,, Helen Scott and Joanna Howarth, the lead authors on the study, added: “We have now found that both sides of the double helix can each produce a microRNA. These two microRNAs are almost a perfect mirror of each other, but due to slight differences in their sequence, they regulate different sets of protein producing RNAs, which will in turn affect different biological functions. Such mirror-miRNAs are likely to represent a new group of microRNAs with complex roles in coordinating gene expression, doubling the capacity of regulation.” http://phys.org/news/2012-04-helix.html

DNA Caught Rock 'N Rollin': On Rare Occasions DNA Dances Itself Into a Different Shape - January 2011 Excerpt: Because critical interactions between DNA and proteins are thought to be directed by both the sequence of bases and the flexing of the molecule, these excited states represent a whole new level of information contained in the genetic code. http://www.sciencedaily.com/releases/2011/01/110128104244.htm

Multidimensional Genome – Dr. Robert Carter – 10 minute video http://www.metacafe.com/watch/8905048/

The next evolutionary synthesis: Jonathan BL Bard Excerpt: We now know that there are at least 50 possible functions that DNA sequences can fulfill [8], that the networks for traits require many proteins and that they allow for considerable redundancy [9]. The reality is that the evolutionary synthesis says nothing about any of this; for all its claim of being grounded in DNA and mutation, it is actually a theory based on phenotypic traits. This is not to say that the evolutionary synthesis is wrong, but that it is inadequate – it is really only half a theory! http://www.biosignaling.com/content/pdf/1478-811X-9-30.pdf

Systems biology: Untangling the protein web - July 2009 Excerpt: Vidal thinks that technological improvements — especially in nanotechnology, to generate more data, and microscopy, to explore interaction inside cells, along with increased computer power — are required to push systems biology forward. "Combine all this and you can start to think that maybe some of the information flow can be captured," he says. But when it comes to figuring out the best way to explore information flow in cells, Tyers jokes that it is like comparing different degrees of infinity. "The interesting point coming out of all these studies is how complex these systems are — the different feedback loops and how they cross-regulate each other and adapt to perturbations are only just becoming apparent," he says. "The simple pathway models are a gross oversimplification of what is actually happening." http://www.nature.com/nature/journal/v460/n7253/full/460415a.html
etc.. etc.. etc..

bornagain77
March 16, 2013 at 03:50 PM PST
further notes:
Circular RNAs: A Hidden, Parallel Universe - Cornelius Hunter PhD. - March 2, 2013 Excerpt: Recall that protein-coding genes, in addition to coding for an incredible protein machine, may also contain several more layers of information encoding signals for the transcript (mRNA) stability, mRNA editing, DNA copy error correction, the speed of translation, the protein’s three-dimensional protein structure, the stability of that structure, the multiple functions of the protein, interactions of the protein with other proteins, instructions for transport, avoiding an amyloid state, any other genes that overlap with the gene, and controlling tRNA selection which can help to respond to different environmental conditions. That is a tall order and now we have yet another layer of information which genes must encode: circular RNA macromolecules which just happen to interact with microRNA and which just happen to be expressed at the right time, because if they are expressed at the wrong time you don’t have a normal brain. And amazingly, in protein-coding genes, circular RNA macromolecules may be encoded both in the antisense strand and in the sense strand. In fact numerous circular RNAs form by head-to-tail splicing of exons.,,, http://darwins-god.blogspot.com/2013/03/circular-rnas-hidden-parallel-universe.html

"Complexity Brake" Defies Evolution - August 2012 Excerpt: Physicists can use statistics to describe a homogeneous system like an ideal gas, because one can assume all the member particles interact the same. Not so with life. When describing heterogeneous systems each with a myriad of possible interactions, the number of discrete interactions grows faster than exponentially. Koch showed how Bell's number (the number of ways a system can be partitioned) requires a comparable number of measurements to exhaustively describe a system. Even if human computational ability were to rise exponentially into the future (somewhat like Moore's law for computers), there is no hope for describing the human "interactome" -- the set of all interactions in life. "This is bad news. Consider a neuronal synapse -- the presynaptic terminal has an estimated 1000 distinct proteins. Fully analyzing their possible interactions would take about 2000 years. Or consider the task of fully characterizing the visual cortex of the mouse -- about 2 million neurons. Under the extreme assumption that the neurons in these systems can all interact with each other, analyzing the various combinations will take about 10 million years..., even though it is assumed that the underlying technology speeds up by an order of magnitude each year." Even with shortcuts like averaging, "any possible technological advance is overwhelmed by the relentless growth of interactions among all components of the system," Koch said.,,, Why can't we use the same principles that describe technological systems? Koch explained that in an airplane or computer, the parts are "purposefully built in such a manner to limit the interactions among the parts to a small number." The limited interactome of human-designed systems avoids the complexity brake. "None of this is true for nervous systems.",,, to read more go here: http://www.evolutionnews.org/2012/08/complexity_brak062961.html

Unexpectedly small effects of mutations in bacteria bring new perspectives - November 2010 Excerpt: ,,, using extremely sensitive growth measurements, doctoral candidate Peter Lind showed that most mutations reduced the rate of growth of bacteria by only 0.500 percent. No mutations completely disabled the function of the proteins, and very few had no impact at all. Even more surprising was the fact that mutations that do not change the protein sequence had negative effects similar to those of mutations that led to substitution of amino acids. A possible explanation is that most mutations may have their negative effect by altering mRNA structure, not proteins, as is commonly assumed. http://www.physorg.com/news/2010-11-unexpectedly-small-effects-mutations-bacteria.html

The Majority of Animal Genes Are Required for Wild-Type Fitness. Cell. - Ramani, A. K. et al. 2012. - 148 (4): 792-802. Excerpt: Whereas previous studies typically assess phenotypes that are detectable by eye after a single generation, we monitored growth quantitatively over several generations. In contrast to previous estimates, we find that, in these multigeneration population assays, the majority of genes affect fitness, and this suggests that genetic networks are not robust to mutation. Our results demonstrate that, in a single environmental condition, most animal genes play essential roles. This is a higher proportion than for yeast genes, and we suggest that the source of negative selection is different in animals and in unicellular eukaryotes. http://www.icr.org/article/7166/

Epistasis between Beneficial Mutations - July 2011 Excerpt: We found that epistatic interactions between beneficial mutations were all antagonistic—the effects of the double mutations were less than the sums of the effects of their component single mutations. We found a number of cases of decompensatory interactions, an extreme form of antagonistic epistasis in which the second mutation is actually deleterious in the presence of the first. In the vast majority of cases, recombination uniting two beneficial mutations into the same genome would not be favored by selection, as the recombinant could not outcompete its constituent single mutations. https://uncommondescent.com/epigenetics/darwins-beneficial-mutations-do-not-benefit-each-other/

Mutations: when benefits level off - June 2011 - (Lenski's e-coli after 50,000 generations, which is approximately 1 million years of supposed human evolution) Excerpt: After having identified the first five beneficial mutations combined successively and spontaneously in the bacterial population, the scientists generated, from the ancestral bacterial strain, 32 mutant strains exhibiting all of the possible combinations of each of these five mutations. They then noted that the benefit linked to the simultaneous presence of five mutations was less than the sum of the individual benefits conferred by each mutation individually. http://www2.cnrs.fr/en/1867.htm?theme1=7

A Serious Problem for Darwinists: Epistasis Decreases Chances of Beneficial Mutations - November 8, 2012 Excerpt: A recent paper in Nature finds that epistasis (interactions between genetic changes) is much more pervasive than previously assumed. This strongly limits the ability of beneficial mutations to confer fitness on organisms.,,, It takes an outsider to read this paper and see how disturbing it should be to the consensus neo-Darwinian theory. All that Darwin skeptics can do is continue to point to papers like this as severe challenges to the consensus view. Perhaps a few will listen and take it seriously. http://www.evolutionnews.org/2012/11/epistasis_decr066061.html
bornagain77
March 16, 2013 at 03:49 PM PST
FSCO/I has been adequately described, quantified, exemplified, empirically tested as a reliable sign of design any number of times...
Adequately described? Adequately? On whose judgement? And where can this adequate description be found? Quantified? Are you seriously claiming you can quantify FSCO/I with regard to a biological example? Surely the way to confirm this assertion is to demonstrate that you can indeed do what you claim. Exemplified? Is it too much to ask for a cut-and-paste of, or a link to, this example? (Let's hope it is a real biological example.) Empirically tested as a reliable sign of design? What on Earth would a sign of design be other than "it sure looks designed to me"? And how do your "empirical" tests work, and how and on what were they conducted? Any number of times? So how hard can it be to tell us what this empirical testing consisted of?

_________

AF, cf. 15 ff below. Your questions above are answered -- have long been answered -- here on at IOSE, which as can be easily seen, I wrote. You should be well aware of this 101, given your 8 years of observing the ID debates. KF

Alan Fox
March 16, 2013 at 03:47 PM PST
lifepsy, if I may offer a bit more to,,,
What mysterious barrier do IDers think prevents microevolutionary change from accumulating until it becomes macroevolution?
Well, that 'mysterious barrier' that 'prevents microevolutionary change from accumulating until it becomes macroevolution', once you understand it, is not really that mysterious of a barrier at all,,,

Poly-Functional Complexity equals Poly-Constrained Complexity

The primary problem that poly-functional complexity presents for neo-Darwinists, or even Theistic Evolutionists, is this: to put it plainly, the finding of a severely poly-functional/poly-constrained genome by the ENCODE study, and further studies, has put the odds of what was already astronomically impossible (finding a functional protein in sequence space) to what can only be termed, for lack of better words, fantastically astronomically impossible. I.e. instead of the infamous "Methinks it is like a weasel" single element of functional information that Darwinists pretend they are facing in any evolutionary search for a functional sequence, where single letters can be changed without affecting anything but the particular sequence, we would actually be encountering something more akin to this illustration found on page 141 of the book Genetic Entropy by Dr. Sanford in our search for a functionally meaningful sequence in biology:
S A T O R
A R E P O
T E N E T
O P E R A
R O T A S

http://en.wikipedia.org/wiki/Sator_Square
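The square's four-way symmetry, and the way a single "mutation" disturbs all four readings at once, can be checked mechanically. A minimal Python sketch (mine, for illustration; only the grid itself is taken from the square above):

```python
SQUARE = ["SATOR", "AREPO", "TENET", "OPERA", "ROTAS"]

def readings(rows):
    """Return the four ways of reading a 5x5 letter square:
    rows left-to-right, rows reversed, columns top-down, columns reversed."""
    cols = ["".join(r[i] for r in rows) for i in range(5)]
    flat, flat_cols = "".join(rows), "".join(cols)
    return [flat, flat[::-1], flat_cols, flat_cols[::-1]]

original = readings(SQUARE)
print(len(set(original)))  # 1 -- all four readings give the same 25-letter text

# Mutate one letter: row 0, position 1 (A -> X)
mutant = ["SXTOR"] + SQUARE[1:]

# For each of the four readings, find where it now differs from the original
hits = sorted(next(i for i, (a, b) in enumerate(zip(r, original[0])) if a != b)
              for r in readings(mutant))
print(hits)  # [1, 5, 19, 23] -- one change lands at four different positions,
             # disturbing each of the four readings in a different place
```

So a single substitution cannot be "tried out" against one reading in isolation; it is simultaneously constrained by all four.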
This translates as: THE SOWER NAMED AREPO HOLDS THE WORKING OF THE WHEELS. This ancient puzzle, which dates back to 79 AD, reads the same four different ways. Thus, if we change (mutate) any letter we may get a new meaning for a single reading taken any one way, as in Dawkins' weasel program, but we will consistently destroy the other three readings of the message with the new mutation (save for the center). This is what is meant when it is said that a poly-functional genome is poly-constrained to any random mutations, and it is thus the 'mysterious barrier' that 'prevents microevolutionary change from accumulating until it becomes macroevolution.' Notes:
The Extreme Complexity Of Genes – Dr. Raymond G. Bohlin – video
http://www.metacafe.com/watch/8593991/

Astonishing DNA complexity update
Excerpt: (ENCODE revealed) The untranslated regions (now called UTRs, rather than ‘junk’) are far more important than the translated regions (the genes), as measured by the number of DNA bases appearing in RNA transcripts. Genic regions are transcribed on average in five different overlapping and interleaved ways, while UTRs are transcribed on average in seven different overlapping and interleaved ways. Since there are about 33 times as many bases in UTRs than in genic regions, that makes the ‘junk’ about 50 times more active than the genes.
http://creation.com/astonishing-dna-complexity-update

Dual-Coding Genes in Mammalian Genomes – 2007
Excerpt: A textbook human gene encodes a protein using a single reading frame. Alternative splicing brings some variation to that picture, but the notion of a single reading frame remains. Although this is true for most of our genes, there are exceptions. Like viral counterparts, some eukaryotic genes produce structurally unrelated proteins from overlapping reading frames. The examples are spectacular (G-protein alpha subunit [Gnas1] or INK4a tumor suppressor), but scarce. The scarcity is anthropogenic in origin: we simply do not believe that dual-coding genes can occur in eukaryotes. To challenge this assumption, we performed the first genome-wide scan for mammalian genes containing alternative reading frames located out of frame relative to the annotated protein-coding region. Using a newly developed statistical framework, we identified 40 such genes. Because our approach is very conservative, this number is likely a significant underestimate, and future studies will identify more alternative reading frame–containing genes with fascinating biology.
http://www.plosone.org/article/info:doi/10.1371/journal.pcbi.0030091

A genome-wide study of dual coding regions in human alternatively spliced genes – 2006
Excerpt: Alternative splicing is a major mechanism for gene product regulation in many multicellular organisms. By using different exon combinations, some coding regions can encode amino acids in multiple reading frames in different transcripts. Here we performed a systematic search through a set of high-quality human transcripts and show that approximately 7% of alternatively spliced genes contain dual coding regions.
http://www.ncbi.nlm.nih.gov/pubmed/16365380

Time to Redefine the Concept of a Gene? – Sept. 10, 2012
Excerpt: As detailed in my second post on alternative splicing, there is one human gene that codes for 576 different proteins, and there is one fruit fly gene that codes for 38,016 different proteins! While the fact that a single gene can code for so many proteins is truly astounding, we didn’t really know how prevalent alternative splicing is. Are there only a few genes that participate in it, or do most genes engage in it? The ENCODE data presented in reference 2 indicates that at least 75% of all genes participate in alternative splicing. They also indicate that the number of different proteins each gene makes varies significantly, with most genes producing somewhere between 2 and 25. Based on these results, it seems clear that the RNA transcripts are the real carriers of genetic information. This is why some members of the ENCODE team are arguing that an RNA transcript, not a gene, should be considered the fundamental unit of inheritance.
http://networkedblogs.com/BYdo8

Scientists Map All Mammalian Gene Interactions – August 2010
Excerpt: Mammals, including humans, have roughly 20,000 different genes. … They found a network of more than 7 million interactions encompassing essentially every one of the genes in the mammalian genome.
http://www.sciencedaily.com/releases/2010/08/100809142044.htm

Insight into cells could lead to new approach to medicines – 2010
Excerpt: Scientists expected to find simple links between individual proteins but were surprised to find that proteins were inter-connected in a complex web. Dr Victor Neduva, of the University of Edinburgh, who took part in the study, said: "Our studies have revealed an intricate network of proteins within cells that is much more complex than we previously thought."
http://www.physorg.com/news196402353.html

The Complexity of Gene Expression, Protein Interaction, and Cell Differentiation – Jill Adams, Ph.D. – 2008
Excerpt: it seems that a single protein can have dozens, if not hundreds, of different interactions … In a commentary that accompanied Stumpf's article, Luis Nunes Amaral (2008) wrote, "These numbers provide a sobering view of where we stand in our cataloging of the human interactome. At present, we have identified less than 0.3% of all estimated interactions among human proteins. We are indeed at the dawn of systems biology."
http://www.nature.com/scitable/topicpage/the-complexity-of-gene-expression-protein-interaction-34575
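The dual-coding point in the notes above can be illustrated with a toy sketch (the sequence and the mutated position are hypothetical, chosen purely for illustration, not taken from the cited papers): a single base substitution changes a codon in each of two overlapping reading frames, so that one base is constrained by two "messages" at once.

```python
seq = "ATGGCCTTA"  # hypothetical 9-base sequence, for illustration only

def codons(s, frame):
    """Split s into 3-letter codons starting at the given frame offset."""
    return [s[i:i + 3] for i in range(frame, len(s) - 2, 3)]

# One point mutation at position 4: C -> G
mut = seq[:4] + "G" + seq[5:]

# Which codon indices changed in frame 0 and in frame 1?
changed = {
    f: [i for i, (a, b) in enumerate(zip(codons(seq, f), codons(mut, f))) if a != b]
    for f in (0, 1)
}
print(changed)  # {0: [1], 1: [1]} -- one substitution, a hit in each frame
```

In frame 0 the codon GCC becomes GGC, and in frame 1 the overlapping codon CCT becomes GCT: the same single change is "read" twice.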
bornagain77
March 16, 2013 at 03:33 PM PST
What mysterious barrier do IDers think prevents microevolutionary change from accumulating until it becomes macroevolution?
The barrier is the lack of a viable mechanism, obviously. Why can I jump 3 feet in the air but not 300? It's not rocket science. Neo-Darwinian processes have been demonstrated to be inadequate function producers even under the most favorable conditions. Simply holding to the superstitious belief that "time and chance make all things possible" seems a questionable scientific practice. And the word "microevolution" is very unfortunate, as it implies that macroevolution is possible and has actually occurred. That is probably the source of a great deal of confusion in the evolutionist camps.

lifepsy
March 16, 2013 at 12:23 PM PST
AF: As for an alleged fallacy of reification, the issue is that configuration spaces are cut down from phase or state spaces [by eliminating momentum]; phase spaces are a pattern of modelling that has been used in physics since Gibbs et al in the C19, in the context of grounding statistical thermodynamics (and, yes, this is the same thermodynamics we are supposedly ignorant of . . . ). Abstract collections of configurations have long been a part of the relevant analysis, and if you were serious, you could start with the space of possible arrangements of a string of 899 ASCII characters, which is where I started. KF PS: Let me add from Wiki:
In mathematics and physics, a phase space is a space in which all possible states of a system are represented, with each possible state of the system corresponding to one unique point in the phase space. For mechanical systems, the phase space usually consists of all possible values of position and momentum variables (i.e. the cotangent space of configuration space). The concept of phase space was developed in the late 19th century by Ludwig Boltzmann, Henri Poincaré, and Willard Gibbs.[1] . . . . In a phase space, every degree of freedom or parameter of the system is represented as an axis of a multidimensional space; a one-dimensional system is called a phase line, while a two-dimensional system is called a phase plane. For every possible state of the system, or allowed combination of values of the system's parameters, a point is plotted in the multidimensional space. [--> I tend to think in terms of an n-member vector for n degrees of freedom, e.g. the 899-member ASCII code string has 899 elements, each with 128 possible states; this can be extended WLOG to general systems that can be represented by a nodes-and-arcs wiring diagram, as such diagrams can be reduced to strings, as is done with AutoCAD etc. The "spatial" picture emerges once we insert the concept of [extended] Hamming distance between points in the n-dimensional space, rooted in digit-wise differences in value . . . ] Often this succession of plotted points is analogous to the system's state evolving over time. In the end, the phase diagram represents all that the system can be, and its shape can easily elucidate qualities of the system that might not be obvious otherwise. A phase space may contain a great many dimensions. For instance, a gas containing many molecules may require a separate dimension for each particle's x, y and z positions and momenta as well as any number of other properties.
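The configuration-space arithmetic for the 899-character string quoted in the thread (128^899 ≈ 2.41 × 10^1894, versus the ~10^150 Planck-time states figure) can be checked in a few lines; a minimal sketch:

```python
from math import log10

chars, states = 899, 128            # 899 ASCII characters, 128 states each
space_log10 = chars * log10(states)  # log10 of the number of configurations

# Express 128^899 as coeff * 10^exponent
exponent = int(space_log10)
coeff = 10 ** (space_log10 - exponent)
print(exponent, round(coeff, 2))     # 1894 2.41 -- i.e. about 2.41e1894

# How far this exceeds the ~10^150 Planck-time states of the observed cosmos
print(round(space_log10 - 150))      # 1744 -- the space is larger by ~10^1744
```

Working in log10 avoids computing the 1895-digit integer directly, though Python's arbitrary-precision `128 ** 899` would give the same leading digits.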
kairosfocus
March 16, 2013 at 12:07 PM PST
And you wonder why people don't bother to read what you write! "Islands of function"; "reification" . . .

________ Ad hominem, not a substantial response. And, FYI, FSCO/I has been adequately described, quantified, exemplified, and empirically tested as a reliable sign of design any number of times; there just seems to be a pretence on your part that by ignoring, strawmannising and making dismissive talking points, you can make reality conform to your ideology. I guess that speaks to where evolutionary materialism-rooted radical relativism leads, and it is not pretty. KF

Alan Fox
March 16, 2013 at 11:52 AM PST