[ID Found’ns Series, cf. also Bartlett here]
Irreducible complexity is probably the most violently objected to foundation stone of Intelligent Design theory. So, let us first of all define it by slightly modifying Dr Michael Behe’s original statement in his 1996 Darwin’s Black Box [DBB]:
What type of biological system could not be formed by “numerous successive, slight modifications?” Well, for starters, a system that is irreducibly complex. By irreducibly complex I mean a single system composed of several well-matched interacting parts that contribute to the basic function, wherein the removal of any one of the [core] parts causes the system to effectively cease functioning. [DBB, p. 39, emphases and parenthesis added. Cf. expository remarks in comment 15 below.]
Behe proposed this definition in response to the following challenge by Darwin in Origin of Species:
If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case . . . . We should be extremely cautious in concluding that an organ could not have been formed by transitional gradations of some kind. [Origin, 6th edn, 1872, Ch VI: “Difficulties of the Theory.”]
In fact, there is a bit of question-begging by deck-stacking in Darwin’s statement: we are dealing with empirical matters, and one does not have a right to impose, in effect, outright logical/physical impossibility — “could not possibly have been formed” — as a criterion of test.
If one is making a positive scientific assertion that complex organs exist and were credibly formed by gradualistic, undirected change through chance mutations and differential reproductive success through natural selection and similar mechanisms, one has a duty to provide decisive positive evidence of that capacity. Behe’s onward claim is then quite relevant: for dozens of key cases, no credible macro-evolutionary pathway (especially no detailed biochemical and genetic pathway) has been empirically demonstrated and published in the relevant professional literature. That was true in 1996, and despite several attempts to dismiss key cases such as the bacterial flagellum [which is illustrated at the top of this blog page] or the relevant part of the blood clotting cascade [hint: picking a part of the cascade — the portion before the “fork” — that Behe did not address as the IC core is a strawman fallacy], it arguably still remains true today.
Now, we can immediately lay the issue of the fact of irreducible complexity as a real-world phenomenon to rest.
For, a situation where core, well-matched, and co-ordinated parts of a system are each necessary for and jointly sufficient to effect the relevant function is a commonplace fact of life — one that is familiar from all manner of engineered systems, such as the classic double-acting steam engine:
Fig. A: A double-acting steam engine (Courtesy Wikipedia)
Such a steam engine is made up of rather commonly available components: cylinders, tubes, rods, pipes, crankshafts, disks, fasteners, pins, wheels, drive-belts, valves etc. But, because a core set of well-matched parts has to be carefully organised according to a complex “wiring diagram,” the specific function of the double-acting steam engine is not explained by the mere existence of the parts.
Nor, can simply choosing and re-arranging similar parts from say a bicycle or an old-fashioned car or the like create a viable steam engine. Specific mutually matching parts [matched to thousandths of an inch usually], in a very specific pattern of organisation, made of specific materials, have to be in place, and they have to be integrated into the right context [e.g. a boiler or other source providing steam at the right temperature and pressure], for it to work.
If one core part — e.g. piston, cylinder, valve, crankshaft — breaks down or is removed, core function obviously ceases.
Irreducible complexity is not only a concept but a fact.
But, why is it said that irreducible complexity is a barrier to Darwinian-style [macro-]evolution and a credible sign of design in biological systems?
First, once we are past a reasonable threshold of complexity, irreducible complexity [IC] is a form of functionally specific complex organisation and implied information [FSCO/I], i.e. it is a case of the specified complexity that is already immediately a strong sign of design on which the design inference rests. (NB: Cf. the first two articles in the ID foundations series — here and here.)
Fig. B, on the exploded, and nodes and arcs “wiring diagram” views of how a complex, functionally specific entity is assembled, will help us see this:
Fig. B (i): An exploded view of a gear pump. (Courtesy, Wikipedia)
Fig. B(ii): A Piping and Instrumentation Diagram, illustrating how nodes, interfaces and arcs are “wired” together in a functional mesh network (Source: Wikimedia, HT: Citizendia; also cf. here, on polygon mesh drawings.)
We may easily see from Fig. B (i) and (ii) how specific components — which may themselves be complex — sit at nodes in a network, and are wired together in a mesh that specifies interfaces and linkages. From this, a set of parts and wiring instructions can be created, and reduced to a chain of contextual yes/no decisions. On the simple functionally specific bits metric, once that chain exceeds 1,000 decisions, we have an object so complex that it is not credible that the whole universe, serving as a search engine, could produce it spontaneously without intelligent guidance. And so, once we have to have several well-matched parts arranged in a specific “wiring diagram” pattern to achieve a function, it is almost trivial to run past 125 bytes [= 1,000 bits] of implied function-specifying information.
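As a rough numerical sketch of the scale involved (my own back-of-envelope illustration, using the ~10^150 universal-resources bound common in the design literature; the figures are stated assumptions, not measurements):

```python
# Back-of-envelope comparison (illustrative assumptions only): the size of
# a 1,000-bit configuration space vs. a generous upper bound on the number
# of states the observable universe could ever sample.

config_space = 2 ** 1000        # distinct 1,000-bit configurations

# Commonly cited bound: ~10^80 atoms x ~10^45 Planck-ticks/s x ~10^25 s
universe_samples = 10 ** 150

digits = len(str(config_space)) - 1          # 2^1000 is roughly 10^301
fraction = universe_samples / config_space   # portion of the space searchable

print(f"2^1000 is about 10^{digits}")
print(f"fraction of the space the universe could sample: ~{fraction:.1e}")
```

Even granting every atom a Planck-rate search for the universe’s lifetime, the searchable fraction of a 1,000-bit space is vanishingly small — which is the point of the threshold.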
Of the significance of such a view, J. S. Wicken observed in 1979:
Indeed, the implication of that complex, information-rich functionally specific organisation is the source of Sir Fred Hoyle’s metaphor of comparing the idea of spontaneous assembly of such an entity to a tornado in a junkyard assembling a flyable 747 out of parts that are just lying around.
Similarly, if one were to do a Humpty Dumpty experiment — setting up a cluster of vials of sterile saline solution with nutrients, putting a bacterium in each, then pricking it so the contents of the cell leak out — it is not expected that in any case the parts would spontaneously re-assemble to yield a viable bacterial colony.
But also, IC is a barrier to the usual suggested counter-argument, co-option or exaptation based on a conveniently available cluster of existing or duplicated parts. For instance, Angus Menuge has noted that:
For a working [bacterial] flagellum to be built by exaptation, the five following conditions would all have to be met:
C1: Availability. Among the parts available for recruitment to form the flagellum, there would need to be ones capable of performing the highly specialized tasks of paddle, rotor, and motor, even though all of these items serve some other function or no function.
C2: Synchronization. The availability of these parts would have to be synchronized so that at some point, either individually or in combination, they are all available at the same time.
C3: Localization. The selected parts must all be made available at the same ‘construction site,’ perhaps not simultaneously but certainly at the time they are needed.
C4: Coordination. The parts must be coordinated in just the right way: even if all of the parts of a flagellum are available at the right time, it is clear that the majority of ways of assembling them will be non-functional or irrelevant.
C5: Interface compatibility. The parts must be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’: even if a paddle, rotor, and motor are put together in the right order, they also need to interface correctly.
(Agents Under Fire: Materialism and the Rationality of Science, pp. 104-105, Rowman & Littlefield, 2004. HT: ENV.)
In short, the co-ordinated and functional organisation of a complex system is itself a factor that needs credible explanation.
However, as Luskin notes for the iconic flagellum, “Those who purport to explain flagellar evolution almost always only address C1 and ignore C2-C5.” [ENV.]
And yet, unless all five factors are properly addressed, the matter has plainly not been adequately explained. Worse, the classic attempted rebuttal, the Type Three Secretory System [T3SS], is not only based on a subset of the genes for the flagellum [as part of the self-assembly, the flagellum must push components out of the cell], but functionally, it works to help certain bacteria prey on eukaryote organisms. Thus, if anything, the T3SS is not only a component part that has to be integrated under C1 – 5, but it is credibly derivative of the flagellum and an adaptation subsequent to the origin of Eukaryotes. Also, it is just one of several components, and is arguably itself an IC system. (Cf. Dembski here.)
Going beyond all of this, in the well known Dover 2005 trial, and citing ENV, ID lab researcher Scott Minnich has testified to a direct confirmation of the IC status of the flagellum:
Scott Minnich has properly tested for irreducible complexity through genetic knock-out experiments he performed in his own laboratory at the University of Idaho. He presented this evidence during the Dover trial, which showed that the bacterial flagellum is irreducibly complex with respect to its complement of thirty-five genes. As Minnich testified: “One mutation, one part knock out, it can’t swim. Put that single gene back in we restore motility. Same thing over here. We put, knock out one part, put a good copy of the gene back in, and they can swim. By definition the system is irreducibly complex. We’ve done that with all 35 components of the flagellum, and we get the same effect.” [Dover Trial, Day 20 PM Testimony, pp. 107-108. Unfortunately, Judge Jones simply ignored this fact reported by the researcher who did the work, in the open court room.]
That is, using “knockout” techniques, the 35 relevant flagellar proteins in a target bacterium were knocked out then restored one by one.
The pattern for each DNA-sequence: OUT — no function, BACK IN — function restored.
Thus, the flagellum is credibly empirically confirmed as irreducibly complex.
The “Knockout Studies” concept — a research technique that rests directly on the IC property of many organism features — needs some explanation.
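As an illustrative toy model (my own sketch in code, not Minnich’s actual laboratory protocol; the gene names are hypothetical placeholders), the OUT/BACK-IN knockout pattern amounts to testing, part by part, that function is all-or-none:

```python
# Toy knockout/restore loop (illustrative only -- function is modelled
# as requiring every one of 35 hypothetical core parts).

CORE_PARTS = {f"gene_{i:02d}" for i in range(1, 36)}  # 35 placeholder genes

def motile(genome):
    """All-or-none model: the cell 'swims' only if every core part is present."""
    return CORE_PARTS <= genome

genome = set(CORE_PARTS)
for part in sorted(CORE_PARTS):
    genome.discard(part)            # knock the gene OUT
    swims_without = motile(genome)  # expected: no function
    genome.add(part)                # put a good copy BACK IN
    swims_with = motile(genome)     # expected: function restored
    assert not swims_without and swims_with

print("All 35 knockouts: OUT -> no function, BACK IN -> function restored.")
```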
102 Replies to “ID Foundations, 3: Irreducible Complexity as concept, as fact, as [macro-]evolution obstacle, and as a sign of design”
my compliments: you are doing great work!
“Violently?” Where’s the violence? Perhaps you mean “vigorously.”
It is objected to because it is the basis of many fallacious arguments.
That’s an example of a fallacious argument. No evolutionist supposes that you can start with a mere pile of parts, and it will magically assemble itself into an organism.
Neil Rickert, your comment at  is deliciously ironic, but I don’t think you intended it to be.
You state that irreducible complexity is the basis of many fallacious arguments, and then you attempt to demonstrate your assertion by suggesting that the following argument advanced by kairosfocus is fallacious: “You can start with a mere pile of parts and it will magically assemble itself into an organism.”
You rebut kairosfocus by stating: “No evolutionist supposes that you can start with a mere pile of parts, and it will magically assemble itself into an organism.”
Here’s the irony. Kairosfocus never said or suggested or implied that “You can start with a mere pile of parts and it will magically assemble itself into an organism.”
Therefore, in attempting to demonstrate a fallacious argument based on the concept of irreducible complexity, you have yourself engaged in a fallacious argument commonly employed by Darwinists such as yourself, namely the strawman fallacy.
Oh, and by the way Neil Rickert, before you criticize kairosfocus’ usage of the word “violent,” perhaps you should look it up in the dictionary. “Violent” can mean “acting with rough force,” but my dictionary says that an alternate meaning is “roughly or immoderately vehement or ardent,” which is the meaning implied by kairosfocus’ statement.
Thanks for your thoughts.
Pardon, however, but on your first point, when people’s careers are being unjustifiably busted, slander is being routinely used and courts — an explicit instrument of state force — are issuing slander and strawman based unjust rulings under false colour of law, VIOLENT seems to be the unfortunately apt term. (That is if you insist on sense 1 from Mr Arrington. I did in fact primarily intend sense 2, where verbal violence is a serious concern.)
You make a general remark about fallacies.
Could you kindly specify just what fallacies, with cases in point — a vague, adverse generalisation is itself a fallacy, is it not?
I see you have asserted, though, that my note that the double acting steam engine — a case in point of a system with an irreducible core of key parts — is not explained by the mere presence of components, is a fallacy.
You will note, however [on pain of a strawman], that I further pointed out that a highly specific, complex arrangement — organisation — of the relevant, well-matched parts is required for a functional steam engine to exist. Such an arrangement, even where there may be sub-components that do other things of interest [e.g. pipes, valves, wheels, rods], is maximally unlikely to result from the equivalent of a tornado passing through a junkyard, as Hoyle so correctly noted.
On massive inductive experience, irreducibly complex systems that have an implied scope of information beyond the FSCI threshold [1,000 yes/no decisions to pick, match and arrange the core parts] are — where we see the causal process directly — invariably and reliably the product of design.
Going back to the 747 example: if you see a flyable jumbo jet, you do not infer to a tornado in a junkyard in Seattle, but to a certain company known as Boeing.
Taking the matter down to micro-level, I have for some years in my always linked note hosted a thought experiment discussion on the spontaneous assembly of a micro-jet from parts diffused through a liquid medium.
In effect, the issue at stake there is the spontaneous undoing of diffusion to clump then to sort and organise into [relatively exceedingly rare] functional configurations, in a very large config space. The message is the same, when we scale it down: a complex arrangement of parts that is functionally specific and irreducibly complex, for good reason, is maximally unlikely to emerge by chance and necessity without intelligent direction. [Cf previous discussion here.]
Or the Humpty Dumpty experiment can be undertaken: on the same relevant principles of statistical thermodynamics, we can easily see that the pricked bacterial cell’s components will diffuse through the medium and will be utterly unlikely ever to spontaneously re-assemble as a living cell. Indeed, even just an ink dot in a vat will diffuse and, on the gamut of the universe’s lifespan, will not spontaneously re-clump together.
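A minimal random-walk sketch (my own illustration; a 1-D toy, not a model of an actual cell) shows the one-way character of diffusion — the fraction of walkers found back at their starting point shrinks as diffusion proceeds:

```python
import random

random.seed(1)  # reproducible toy run

def fraction_at_origin(n_particles, n_steps):
    """Fraction of 1-D random walkers that happen to sit at the origin."""
    at_origin = 0
    for _ in range(n_particles):
        position = sum(random.choice((-1, 1)) for _ in range(n_steps))
        if position == 0:
            at_origin += 1
    return at_origin / n_particles

early = fraction_at_origin(2000, 10)    # shortly after release
late = fraction_at_origin(2000, 1000)   # after prolonged diffusion
print(f"at origin after 10 steps: {early:.3f}; after 1000 steps: {late:.3f}")
```

The spreading direction is overwhelmingly probable; the re-clumping direction is not — and re-clumping into one of the relatively rare functional configurations is rarer still.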
When we turn to the specific biologically relevant cases that have been highlighted, can you kindly provide a better explanation of the flagellum’s origin than Mr Miller’s inadequate T3SS suggestion, and a better explanation of the evolutionary origin of the blood clotting cascade; one that does not set up and knock over a strawman?
Similarly, could you provide a technically detailed, empirically well-warranted, step by step explanation for the Darwinian origin of the Avian wing and flight feathers, with muscles, bones and neural controls?
Similarly, for the origin of the avian one-way flow lungs and associated systems?
Not just so stories with one or two tacked on illustrations, but a solid, step by step genetically and anatomically anchored account. (I will waive Behe’s requirement that the matters be published in the peer reviewed literature; just provide sound, well-evidenced explanations that give enough details. Specific, step by step — no big jumps or major interpolations — fossil evidence would be an asset.)
Failing that, the evidence we do have is that what has happened is that the evolutionary materialistic frame has been imposed by the backdoor route of a priori methodological naturalism, structurally biasing the results of inquiry, AKA begging the question. Then, some sort of mechanism that is more or less similar to Darwin’s ideas will be required by logical consequence.
Philip Johnson’s rebuttal to such is apt:
We would welcome your details.
GEM of TKI
Neil Rickert, it appears you have unwisely done this:
PS: Just how well-matched pressure parts have to be is seen from the way high pressure steam leaks (steam proper is invisible) are sometimes searched for: with a broomstick — the jet of hot invisible gas lops off the end like a knife.
No, “vigorously” would mean they are actively pursuing establishing the claims of their position — as in demonstrating that blind, undirected chemical processes, i.e. an accumulation of genetic accidents, can produce the structure in question.
And that ain’t happening. If it is, their failures are not being published.
Correct, they generally don’t have anything to say about the origin of living organisms beyond that it happened.
F/N: The [unmet] OOL challenge . . .
F/N: I have added a dictionary link on “violently.” Cf. esp senses 2, 3 and 5 under AmHD and senses 3 – 6 for Collins (bracketing the “pond”). 🙂
Here is the key fallacious argument related to IC in this post (once made by the author, and the same fallacy made by Scott Minnich in his testimony at Dover):
Regarding the steam engine, the author states
While that may be true, all that means is that the steam engine contains no redundant parts. It does not mean that the system is Irreducibly Complex by Mr. Behe’s original definition.
In order for Irreducible Complexity to be any sort of argument against evolution, it must describe a system where none of its parts by themselves could have performed any function that could be selected upon.
Minnich makes the same mistake in his testimony regarding the IC of the flagellum:
Again, all that means is that the flagellum contains some parts that are now indispensable to perform a certain function. It does not mean that those parts couldn’t have evolved to become the vital core to motility that they are now.
Thanks for your comment.
As the OP specifically discusses, the components and sub-assemblies may have relevant functions without being properly organised and interfaced to form a properly integrated, well-matched whole. (Observe conditions C1 – 5 above, which you have not addressed.)
What you have done is to accept and present a strawmannising talking point by objectors that misrepresents materially what the design scientists have said on the record since 1996.
If you doubt me, look at the top of the post for the definition by Behe [note I added just the word, “core” to emphasise the context], and the context of response to Darwin’s claim.
For the functional core of an entity to be IC, it has to have a cluster of parts that are each necessary and when together in proper arrangement and interfacing are jointly sufficient for the core function.
That various parts or sub-assemblies may work otherwise does not mean that they will work in this entity. On the steam engine, rods, wheels, cylinders, valves, etc. have separate function, but unless they are organised and interfaced properly, they will not work as a double-acting steam engine. And, it is the parts and their organisation and proper meshing that all need to be explained. (In addition, on the flagellum, the device is sequentially self-assembling. Also, that this is a claimed product of evolution on chance plus necessity through Darwinian mechanisms needs to be documented in step by step evidenced details, not simply asserted.)
That is what C1 – 5 are about.
It is not a wise move to allow objectors noted for strawman tactics [cf UD’s Weak Argument Correctives top right this and every UD page] to redefine an entity to their rhetorical advantage, especially when the original was clear and had a specific, contextual meaning.
GEM of TKI
“In order for Irreducible Complexity to be any sort of argument against evolution, it must describe a system where none of its parts by themselves could have performed any function that could be selected upon.” (emphasis mine)
Despite the fact that Behe describes parts of IC systems as having functions of their own (like microtubules), people still buy into this myth 15 years after Behe made it clear that it’s the system’s original function that’s in question – not the parts.
A TTSS may have its own separate function, but the rotary motor is out of the picture.
I notice that when Ken Miller is confronted with this, he tends to just change the subject and say something along the lines of “…well, if that’s what Behe meant, parts could’ve been co-opted so IC becomes meaningless…”
To think that he accuses us of “assuming what we’re trying to prove” in “The Flagellum Unspun” as if he’s not guilty of the same thing.
1- IC is not an argument against evolution, just the blind watchmaker.
2- Dr Behe has already addressed your concerns:
Irreducible Complexity is an Obstacle to Darwinism Even if Parts of a System have other Functions:
F/N: Let’s annotate Behe’s def’n from DBB, 1996, p. 39, as slightly amplified, and in the context of Darwin’s key statement he responded to:
CRD (1872): >> If it could be demonstrated
a –> Note, strength of proof demanded, i.e. deck-stacking begins
that any complex organ existed,
b –> complexity is highlighted, and would refer to multiple parts that work together to achieve a whole function.
c –> Onward context is plainly Paley’s inference to design of organs and features of organisms, which CRD sought to overthrow through his theory of evolution by cumulative small changes.
d –> Paley, ch I of Nat Theol, famously contrasted stumbling across a stone in a field with finding a watch in the same field; inferring design from the characteristics of a watch that are distinct from those of a stone.
e –> Especially, that:
which could not possibly have been
f –> CRD further stacks the deck, i.e. if we take at face value, only logical/physical impossibility would count against his theory
formed by numerous, successive, slight modifications,
g –> The theory is premised on small chance changes cumulating through descent with modification and survival of “favoured races” to yield descent with unlimited modifications sufficient to account for the full range of body plans.
my theory would absolutely break down
h –> If you disprove that descent with incremental modifications is enough, then the darwinian theory fails
i –> It purports to be an account of how what seems designed in life is designoid. >>
MB (1996, aug): >> By irreducibly complex
j –> MB builds on Darwin’s highlighted complex organs
k –> He asserts that a certain possible class of complexity — one that meets certain criteria — is not reducible to such fine incremental steps
I mean a single system
l –> System, thus parts, inputs, processes, outputs, integration and interaction to yield the whole function
composed of several well-matched interacting parts
m –> specifies multiple parts, that must match, i.e. interfacing, organisation and integration to work together are key
that contribute to the basic function,
n –> focus is on a functional core to which the key parts jointly contribute
wherein the removal of any one of the [core] parts
o –> Each [core] part makes a necessary contribution to function
p –> the test of that causal necessity is that the part is removed [cf Minnich’s knockout studies]
q –> THE PARTS ARE EACH CAUSALLY NECESSARY TO THE CORE OR BASIC FUNCTION
causes the system to effectively cease functioning.
r –> removal of any and each such part triggers functional failure
s –> replacement would return its contribution [cf Minnich again]
t –> So, if we have parts 1 . . . n, where each is necessary and the lot, jointly assembled and integrated, are sufficient for function, the core function is all-or-none: it has to be all together at once or it does not work.
u –> All parts must be simultaneously present, must be well-matched and must be properly networked and interfaced for function to result.
v –> So, even if all parts are present but they are not properly organised and interfaced, function will not emerge.
w –> Co-option then faces the issues of correct assembly and functional interfacing, without which parts of relevant function will not work together.
x –> Without foresight, once a systemic whole requires multiple parts, even if relevant parts are available, unless they happen to be matched ahead of time by extreme good luck [utterly unlikely without foresight], a miracle of mutual adaptation has to happen by chance.
y –> Also, the parts have to be somehow assembled to the right location and configuration, which once a fair degree of complexity obtains, is quite difficult: the islands of function in a large configuration space problem. (Think of the over-ambitious boy who dis-assembles his new fishing reel and then needs to correctly re-assemble it.)
z –> Thus, IC is indeed a credible counter to a reasonable [non-deck-stacked] form of Darwin’s claim.
[DBB, p. 39, emphases and parenthesis added.] >>
The suggested test cases would be useful ones to bring forward empirically well supported evolutionary solutions that do not strain the issues of reasonable plausibility. No hopeful monsters, in short.
GEM of TKI
I’m sorry but this is a very simple point and I don’t understand why you guys are not getting it.
Irreducible Complexity is meant as an argument against evolution. It is a direct response to Darwin’s statement (which the author of this post quotes):
However, according to this post, IC is simply a term for a system in which all the parts are needed for it to perform its current function. In other words, it has zero redundant parts.
Living systems that have zero redundant parts are not an argument against evolution because they could have evolved that way.
So my question to you guys is this:
If a living system exhibits Irreducible Complexity, does that mean that it couldn’t have been a product of evolution?
lastyearon you state:
‘I don’t understand why you guys are not getting it.’
Oh do please enlighten us poor misguided ones lastyearon. Those of us who have the audacity to doubt the almighty power of evolution to create the unmatched complexity we find in life by a process of mere ‘filtered errors’:
Nothing In Molecular Biology Is Gradual – Doug Axe PhD. – video
“Charles Darwin said (paraphrase), ‘If anyone could find anything that could not be had through a number of slight, successive, modifications, my theory would absolutely break down.’ Well that condition has been met time and time again. Basically every gene, every protein fold. There is nothing of significance that we can show that can be had in a gradualist way. It’s a mirage. None of it happens that way. – Doug Axe PhD.
In spite of the fact that molecular motors permeate even the simplest bacterial life, there are no detailed Darwinian accounts for the evolution of even one such motor or system.
“There are no detailed Darwinian accounts for the evolution of any fundamental biochemical or cellular system only a variety of wishful speculations. It is remarkable that Darwinism is accepted as a satisfactory explanation of such a vast subject.”
James Shapiro – Molecular Biologist
The following expert doesn’t even hide his very unscientific preconceived philosophical bias against intelligent design,,,
‘We should reject, as a matter of principle, the substitution of intelligent design for the dialogue of chance and necessity,,,
Yet at the same time the same expert readily admits that neo-Darwinism has ZERO evidence for the chance and necessity of material processes producing any cellular system whatsoever,,,
,,,we must concede that there are presently no detailed Darwinian accounts of the evolution of any biochemical or cellular system, only a variety of wishful speculations.’
Franklin M. Harold,* 2001. The way of the cell: molecules, organisms and the order of life, Oxford University Press, New York, p. 205.
Professor Emeritus of Biochemistry, Colorado State University, USA
Michael Behe – No Scientific Literature For Evolution of Any Irreducibly Complex Molecular Machines
“The response I have received from repeating Behe’s claim about the evolutionary literature, which simply brings out the point being made implicitly by many others, such as Chris Dutton and so on, is that I obviously have not read the right books. There are, I am sure, evolutionists who have described how the transitions in question could have occurred.” And he continues, “When I ask in which books I can find these discussions, however, I either get no answer or else some titles that, upon examination, do not, in fact, contain the promised accounts. That such accounts exist seems to be something that is widely known, but I have yet to encounter anyone who knows where they exist.”
David Ray Griffin – retired professor of philosophy of religion and theology
Poly-Functional Complexity equals Poly-Constrained Complexity
Translation: “A” is not an argument against evolution because evolution could have caused “A”.
That is trivially easy to counter: the system’s original function was different than its current function. It evolved.
If you counter with something like “That’s just conjecture. How could something so complex have evolved. Show me the step by step process by which it evolved.” I say, ok. That’s a legitimate question. We don’t know exactly how, and we may never know. Evolutionary biologists are working on finding answers.
However by saying that the individual parts of an IC system could have had their own functions in the past, you effectively remove IC as an objection to evolution.
Thanks for your further comment.
I am afraid, however, a review of the remarks by CRD and MB step by step just above is indicated.
The root issue is the concept of causal necessity and sufficiency for an effect or function.
For instance, heat, fuel and oxidiser are each necessary for, and together are jointly sufficient to form and sustain a fire.
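That necessity/sufficiency structure can be put in a few lines of code (a minimal sketch of the fire example above, nothing more):

```python
from itertools import product

def fire(heat, fuel, oxidiser):
    """Fire occurs exactly when all three factors are present."""
    return heat and fuel and oxidiser

assert fire(True, True, True)          # jointly sufficient: all present -> fire

for combo in product([True, False], repeat=3):
    if not all(combo):                 # each necessary: drop any one factor...
        assert not fire(*combo)        # ...and the effect fails

print("Each factor is necessary; together they are jointly sufficient.")
```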
Once we therefore see a case of considerable complexity with a significant number of parts that are necessary for function, the parts must each be available, must be brought together, and must be correctly arranged and meshed for the function to emerge. And each issue is just as important as the others as a roadblock to emergence of the sort of composite function we are discussing.
If there are missing core parts, no function is possible. Without foresight and planning, that is highly improbable.
Even if there are relevant parts that, were they properly organised and interfaced, would work, the problem is that without foresight compatibility is utterly unlikely: wrong size, wrong orientation of interface points, etc.
Then, the right parts, well matched on interfaces, have to be properly oriented and assembled on a wiring diagram that is itself complex. That is maximally unlikely by chance and forces of necessity without foresighted direction. Assembly, interfacing, tuning and setting up — fine tuning in short — are hard to do.
Specific ways that Darwinian mechanisms have successfully overcome such hurdles need to be empirically shown, not suggested or assumed.
Darwinian mechanisms by definition lack capacity for foresight, planning and the associated intentional provision of properly fitted components and sub-assemblies in light of future needs. That want of plans and means to drive the assembly [and to halt at the precise point where the system has been properly put together] to get the right function all at once points to a systematic want of capacity to do the required job.
Irreducible complexity is real, it is observed in general and biological systems, and it points away from specifically Darwinian mechanisms.
Perhaps there are other evolutionary mechanisms at work, but to have credibility in the face of the sort of concerns this thread is discussing, they would have to be the sort of Intelligent Evolution Wallace proposed.
Irreducible Complexity is a sign pointing to foresighted design, whatever means were used to effect it.
GEM of TKI
F/N: Behe believes in common descent, i.e. evolution. Just, plainly and on the evidence of IC, intelligently directed evolution, used as a means to effect a design. Oddly, many Young Earth Creationists believe in — rapid — evolution/ adaptation within about the taxonomic level of the family [cats, dogs, bats, salmonids etc], as a means of designed robustness and fitting to niches.
Why not take up something like the feathers and the wing, musculature and lungs of a bird. What empirically supported mechanisms could substantiate blind watchmaker thesis macroevolution as the source of flying birds, how? [cf p. 2 of OP]
GEM of TKI
Well lastyearon, let’s look at the most famous example of co-option, the T3SS:
Genetic Entropy Refutation of Nick Matzke’s TTSS (type III secretion system) to Flagellum Evolutionary Narrative:
Excerpt: Comparative genomic analysis show that flagellar genes have been differentially lost in endosymbiotic bacteria of insects. Only proteins involved in protein export within the flagella assembly pathway (type III secretion system and the basal-body) have been kept…
Phylogenetic Analyses of the Constituents of Type III Protein Secretion Systems
Excerpt: We suggest that the flagellar apparatus was the evolutionary precursor of Type III protein secretion systems.
“One fact in favour of the flagellum-first view is that bacteria would have needed propulsion before they needed T3SSs, which are used to attack cells that evolved later than bacteria. Also, flagella are found in a more diverse range of bacterial species than T3SSs. ‘The most parsimonious explanation is that the T3SS arose later.’” – Howard Ochman – Biochemist – New Scientist (Feb 16, 2008)
So lastyearon the ‘unfalsified’ principle of genetic entropy shot down the ‘just so cooption story’ that you find so compelling, not to mention that you are simply incredulous that we would not see how reasonable your position is!!!
Yet lastyearon, despite your disbelief at why we can’t see ‘how simple evolution is’, the whole point is that Genetic Entropy IS THE RULE for all beneficial biological adaptations!!! Darwinian evolution certainly IS NOT!!!! There is not even one exception to this ‘rule’ of genetic entropy, i.e. there are no examples whatsoever of the generation of complexity greater than what was already present in life; there are only examples of loss or ‘adjustments’ of function:
“The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain – Michael Behe – December 2010
Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain.(that is a net ‘fitness gain’ within a ‘stressed’ environment i.e. remove the stress from the environment and the parent strain is always more ‘fit’)
Michael Behe talks about the preceding paper on this podcast:
Michael Behe: Challenging Darwin, One Peer-Reviewed Paper at a Time – December 2010
Evolution Vs Genetic Entropy – Andy McIntosh – video
If a living system exhibits Irreducible Complexity, does that mean that it couldn’t have been a product of evolution?
I would like to get an answer to this for my own clarification.
kairosfocus, you said:
This seems to contradict what Mr. Behe says, according to F2XL:
Which is it? Can parts of IC systems have functions of their own?
Please, look at the discussion in the OP and above on the difference between [Neo-]Darwinian evolutionary mechanisms assumed as capable of causing biodiversity from pond scum to us, and a belief that natural history has in it significant descent with modification in part caused by design.
As I noted in 21, Behe believes in common descent [universal, I think], and YEC’s often believe in rapid specialisation/speciation to fit niches. Their favourite example is the Dog/Wolf family.
In the OP, on p 2, there is a significant excerpt from Wallace, co-founder of evolutionary theory, and a proponent of “intelligent evolution” in his linked magnum opus. Observe his discussion of birds.
Then, you may wish to propose empirical evidence that blind watchmaker type [neo-]Darwinian mechanisms and their various extensions are adequate, and empirically supported as adequate, to account for birds.
If you want instead to talk about flagella, account credibly for the origin of T3SS [and how it seems that in at least certain cases there are genes for the flagellum present, just not enabled or expressed], and then how it onward leads to the full flagellum and control system that allows the bacterium to use the flagellum to move towards nutrient sources.
Similarly, on the relevant part of the blood clotting cascade, noting that if blood was not right from the first in this regard, it would not work, and animals with a problem would tend to bleed to death.
Then, you might want to account for the origin of the code-based self-replication system that is central to cell-based life; which is also irreducibly complex.
Perhaps, you could link or excerpt.
GEM of TKI
F/N: such accounts should not invoke the CRD deck-stacking standard.
In context I am plainly speaking specifically of the core function of the composite system.
For instance, I speak in the OP and the thread above, of exaptation of sub systems that have their own functions, pointing out the issue of well-matched interfacing.
On the steam engine example, you will see I point out how the components have all sorts of existing functions, and even suggest sitting around with a bicycle and a car. The problems of integration and functional organisation then stand out, and the issue that sub-assemblies have to be interfaced, coordinated, and properly arranged is highlighted.
In short, you have (doubtless inadvertently) projected a strawman based on what the objectors you have heard from taught you to see.
They have misrepresented design thought, and have too often been resistant to correction when that has been pointed out.
GEM of TKI
Thanks for your detailed replies. However you are now simply arguing that some systems are sooo complex that you can’t imagine how they could have evolved. Your incredulity is not an argument against evolution.
On the other hand, Michael Behe attempts to make a scientific argument against evolution with his concept of Irreducible Complexity. According to Behe, IC systems by definition could not have evolved in a stepwise fashion, since an IC system needs all of its parts to function.
I have shown (as have many other people) that this is indeed incorrect in principle, since even if something needs all its parts to function the way it does now, that does not prove that it didn’t evolve from a collection of subset systems with different functions.
I’m less interested in trying to prove to you that complexity can evolve. I’m simply interested in making it clear IC is not a true objection to evolution.
lastyearon you state,
‘I’m less interested in trying to prove to you that complexity can evolve.’
You can’t prove it even if you wished to, for there is ZERO evidence for the ‘evolution’ of any complexity greater than what was already present!!!
Is Antibiotic Resistance evidence for evolution? – ‘The Fitness Test’ – video
In fact Scott Minnich mentions in the video,,,
Bacterial Flagella: A Paradigm for Design – Scott Minnich – Video
,,, that it would be impossible to study the genetic basis of the Flagellum unless it were indeed irreducibly complex, since irreducible complexity of the construction and function of the flagella allows them to determine which genes are responsible for which stage of construction and/or of operation of the flagellum!
Bacterial Flagellum: Visualizing the Complete Machine In Situ
Excerpt: Electron tomography of frozen-hydrated bacteria, combined with single particle averaging, has produced stunning images of the intact bacterial flagellum, revealing features of the rotor, stator and export apparatus.
Save for the fact, lastyearon, that you then state:
‘I’m simply interested in making it clear IC is not a true objection to evolution.’
What you are actually trying to do is construct an impossible benchmark for falsification of evolution. As Dr. Behe states in this video, evolution is notorious for its lack of standards for falsification:
Michael Behe on Falsifying Intelligent Design – video
And yet ID readily submits itself for falsification!!! Tell me lastyearon, which other theory in science refuses to submit to falsification?
The Law of Physicodynamic Insufficiency – Dr David L. Abel – November 2010
Excerpt: “If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise.”,,, After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided. The time has come to extend this null hypothesis into a formal scientific prediction: “No non trivial algorithmic/computational utility will ever arise from chance and/or necessity alone.”
The GS (genetic selection) Principle – David L. Abel – 2009
Excerpt: Stunningly, information has been shown not to increase in the coding regions of DNA with evolution. Mutations do not produce increased information. Mira et al (65) showed that the amount of coding in DNA actually decreases with evolution of bacterial genomes, not increases. This paper parallels Petrov’s papers starting with (66) showing a net DNA loss with Drosophila evolution (67). Konopka (68) found strong evidence against the contention of Subba Rao et al (69, 70) that information increases with mutations. The information content of the coding regions in DNA does not tend to increase with evolution as hypothesized. Konopka also found Shannon complexity not to be a suitable indicator of evolutionary progress over a wide range of evolving genes. Konopka’s work applies Shannon theory to known functional text. Kok et al. (71) also found that information does not increase in DNA with evolution. As with Konopka, this finding is in the context of the change in mere Shannon uncertainty. The latter is a far more forgiving definition of information than that required for prescriptive information (PI) (21, 22, 33, 72). It is all the more significant that mutations do not program increased PI. Prescriptive information either instructs or directly produces formal function. No increase in Shannon or Prescriptive information occurs in duplication. What the above papers show is that not even variation of the duplication produces new information, not even Shannon “information.”
F/N: lastyearon, here is the standard that neo-Darwinism, by all rights of scientific integrity, should submit itself to:
The Universal Plausibility Metric (UPM) & Principle (UPP) – Abel – Dec. 2009
Excerpt: Mere possibility is not an adequate basis for asserting scientific plausibility. A precisely defined universal bound is needed beyond which the assertion of plausibility, particularly in life-origin models, can be considered operationally falsified. But can something so seemingly relative and subjective as plausibility ever be quantified? Amazingly, the answer is, “Yes.”,,,
cΩu = Universe = 10^13 reactions/sec X 10^17 secs X 10^78 atoms = 10^108
cΩg = Galaxy = 10^13 X 10^17 X 10^66 atoms = 10^96
cΩs = Solar System = 10^13 X 10^17 X 10^55 atoms = 10^85
cΩe = Earth = 10^13 X 10^17 X 10^40 atoms = 10^70
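The multiplications in the excerpt are easy to verify directly. This sketch simply reproduces the arithmetic quoted above; the atom counts themselves are Abel’s figures, taken on trust here, not independently checked.

```python
# Verify the quoted probabilistic-resource totals:
# reactions/sec (10^13) x seconds (10^17) x atoms, all exact integer powers of ten.

cases = {
    "Universe":     78,   # 10^78 atoms (quoted)
    "Galaxy":       66,
    "Solar System": 55,
    "Earth":        40,
}
for name, atoms_exp in cases.items():
    total = 10**13 * 10**17 * 10**atoms_exp
    exponent = len(str(total)) - 1   # exact exponent, since total is a power of ten
    print(f"{name}: 10^{exponent}")
```

Running it reproduces the 10^108, 10^96, 10^85 and 10^70 figures as stated.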
As well, lastyearon, all neo-Darwinists should, by all rights of scientific integrity, gracefully accept this experimental proof as falsification of their beloved theory:
The Case Against a Darwinian Origin of Protein Folds – Douglas Axe – 2010
Excerpt Pg. 11: “Based on analysis of the genomes of 447 bacterial species, the projected number of different domain structures per species averages 991. Comparing this to the number of pathways by which metabolic processes are carried out, which is around 263 for E. coli, provides a rough figure of three or four new domain folds being needed, on average, for every new metabolic pathway. In order to accomplish this successfully, an evolutionary search would need to be capable of locating sequences that amount to anything from one in 10^159 to one in 10^308 possibilities, something the neo-Darwinian model falls short of by a very wide margin.”
The Case Against a Darwinian Origin of Protein Folds – Douglas Axe, Jay Richards – audio
The picture of the engine, while very pretty, shows that you still don’t understand evolution. If you remove a part it will no longer function as a steam engine, but it may function as something else.
The picture of the engine, mouse-trap, etc, etc, etc, all appeal to our (unspoken) expectation that devices only have a single purpose, but evolution doesn’t care if it changes its function.
Re: you are now simply arguing that some systems are sooo complex that you can’t imagine how they could have evolved. Your incredulity is not an argument against evolution.
First, I note you have not produced the sort of evidence or even serious links that would point us to the evidence that would actually ground the idea that blind watchmaker type darwinian mechanisms can do what is being claimed. So, on the strength of rhetoric instead, we are being invited to accept Darwin’s deck-stacking argument and that on a sarcastic dismissal of asking for the gold standard of scientific warrant: empirical observation and related inference to best explanation.
Sorry, but when the scientific warrant is lacking one is entitled to a critical view, i.e. to incredulity.
Anyway, let me not stand on my epistemic rights but address issues in steps.
1 –> First, pardon, have you read the OP at all? [The background of the two previous posts in this foundation series, here and here?] The specific exposition in 15? With all due respect, you are now beginning to sound like a Darwinist talking points tape stuck on the “repeat” loop. Let me assume that you are not just spouting talking points to distract the thread but genuinely do not understand the gaping holes in a dismissal like you just cited.
2 –> Notice, first and foremost, all the systems in view are not merely complex but functional in ways that depend on very specific organisation. That puts them on very tightly defined zones in large — very very large, often — configuration spaces.
3 –> How large? If there are more than 1,000 yes/no decisions to specify the components, nodes, interfaces and interconnexions of a complex functionally specific entity, we have something like 1.07*10^301 possible configs. [DNA for living cells starts at over 100,000 – 1,000,000 bits of storage capacity, and that does not reckon with the functional complexity of the organised host cell that has to give DNA its effect.]
4 –> The 10^80 or so atoms of the whole observed universe, changing state every Planck time (the shortest duration that makes sense), could only scan through 10^150 states across the cosmos’ thermodynamically credible lifespan, i.e. less than 1 in 10^150 of the possible configs. In short, a cosmos-scope search is a zero size search of such a space, for all practical purposes.
5 –> Just 20 or so typical words of ASCII text [~ 125 bytes] are enough to put us into that too large to search territory. That is why when we see long enough runs of functional English text such as this post, we habitually infer to intelligence as the only observed and credible way to get that sort of functional, complex specific organization. Something that is literally backed up by billions of tests and no credible counter-examples.
6 –> In short FSCO/I is on induction from a very large observation base a reliable sign of design as cause, and the config space analysis just given backs it up in spades.
7 –> This is an inference on what is known and credible [indeed, the reasoning is very close to the statistical grounds for the second law of thermodynamics], not on some mythical incredulity that will not swallow the kind of deck-stacking Darwin indulged in the original post.
8 –> When we specifically deal with irreducible complexity, we are again dealing with a huge technological database, where we know that a lot of systems are made up from functional parts that are co-tuned to work together in very exactingly specific configs, to make clocks, cars, bicycles, aeroplanes, helicopters, radios, PC’s, double-acting steam engines and much more.
9 –> These highly functionally specific and irreducibly complex systems provide a rich base of experience for seeing that such FSCO/I and IC systems are designed as the only directly known source of that sort of system. So, on induction alone, we would have good positive reason to suspect or even strongly believe that FSCO/I and IC systems are best explained as designed.
[ . . . ]
10 –> Now it so happens that we are dealing with living, cell based systems. So, the first IC issue is that we see that the living cell is a metabolising system that on a stored code is able to replicate itself through implementing a von Neumann self-replicator, requiring:
11 –> Also, as pointed out, parts (ii), (iii) and (iv) are each necessary for and together are jointly sufficient to implement a self-replicating machine with an integral von Neumann universal constructor. That is, we see here an irreducibly complex set of core components that must all be present in a properly organised fashion for a successful self-replicating machine to exist.
12 –> No self-replication, no life. And we know the source of codes, algorithms etc. We have already seen that the constraints of IC strongly block Darwinian type pathways — not the same as all evolutionary pathways [you don’t seem to be taking on board some important and longstanding distinctions] — but the problem here is that if this one is not solved, you cannot get to differential reproductive success, as you do not have the possibility of reproduction.
13 –> So, we have excellent positive grounds on inference to best, empirically anchored explanation, to infer to the known source of such: design. Regardless of who the candidate designers may be, we have grounds for seeing the signs of design at work in the foundations of life. INCLUDING IN THE CAPACITY OF LIVING CELLS TO REPLICATE THEMSELVES.
14 –> Going beyond that, an examination of your onward post shows no sign of actual warrant for the claim, just a repetition of the deck-stacking, dismissive assertion, e.g.: even if something needs all its parts to function the way it does now, that does not prove that it didn’t evolve from a collection of subset systems with different functions.
15 –> This simply brushes aside the issue of integration, configuration and interfacing to get the function, in a context where the only credible source of such close matching of components is foresight that planned it that way.
16 –> In short, the effective infinity of ways to mismatch is so vastly larger than the few ways to match and work together, being properly organised, that your dismissal is counter-productive.
17 –> That is, it boils down to a miracle of luck beyond the credible capacity of the cosmos to deliver such luck.
18 –> Not once, but dozens of times over in the typical organism and its body plan.
19 –> Multiplied by dozens and dozens of major body plans and hundreds of major organ systems.
Simply on Occam’s razor, it is plain that design is a far simpler and better warranted account than winning lotteries hundreds of times over, beyond the search resources of the observed cosmos, much less one tiny planet in it.
In short, the very nature of the objections you are making speaks eloquently that the issue on the merits strongly points to design as the best explanation for the many instances of IC in life.
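The configuration-space arithmetic in points 3 to 5 above can be checked in a few lines of standard-library code. This is only a sanity check on the quoted figures (1,000 bits, 125 bytes, 10^150 states); the inferences drawn from them are argued in the text, not in the code.

```python
from math import log10

# Point 3: 1,000 yes/no decisions give 2^1000 distinct configurations.
configs = 2 ** 1000
exponent = int(log10(configs))            # math.log10 handles big ints exactly enough
lead = configs / 10 ** exponent           # leading coefficient
print(f"2^1000 ~ {lead:.2f} x 10^{exponent}")   # ~1.07 x 10^301

# Point 5: 125 bytes of ASCII text = 1,000 bits of raw storage capacity.
print(125 * 8)

# Point 4: a search scanning 10^150 states covers a vanishing fraction.
print(f"fraction covered: ~10^{150 - exponent}")
```

The 1.07 x 10^301 figure in point 3 checks out, as does the 125-byte/1,000-bit equivalence; a 10^150-state scan covers roughly 1 part in 10^151 of such a space.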
GEM of TKI
OT kf; you may really enjoy this song sung by this talented 13 year old:
Greyson Chance Performs “Waiting Outside The Lines” on Ellen
But this is just crazy talk. If you’ve got available monkeys, and you also have available typewriters, you’ll get Shakespeare.
Your incredulity has no bearing!
I am indeed glad that you have been given the opportunity to post at uncommon descent! I have always enjoyed your comments here. Keep up the good work!
Thanks, a young talent like that should be encouraged.
BTW, you once pointed to a quantum version of the Young double slit expt that showed the time reversal effect in action, I believe a certain Dr Quantum may have been involved.
Where is that at? [About 5 mos ago, I had a HD crash and lost a lot of stuff.]
Indeed, infinite monkeys with infinite typewriters, infinite time and infinite paper forests and factories, fuelled by infinite banana plantations can indeed reproduce all books, web posts etc ever written, by pure chance.
But then, how are we ever going to find the islands of functional books in the chaos of an infinite sea of nonsense on paper?
[Here we see a function of mind: zeroing in on the islands of function.]
And, where are we going to get such a plenitude of convenient infinities from?
Thanks for some kind words of encouragement.
GEM of TKI
kf, I believe this is the Dr. Quantum video you are referring to:
Dr. Quantum – Double Slit Experiment & Entanglement – video
Matteo, lastyearon et al:
Looking at your comments I had the following conclusion:
For evolution, nothing is impossible.
Evolution can do anything.
Any biological system, known and still unknown, was generated by evolution. Even if it was designed by a scientist (e.g. GM crops), it still could have been evolved.
Until you give me a particular system in biology that I cannot imagine how it could have evolved, evolution remains the only scientific explanation.
The mere fact, that you do not even consider that the explanation for an IC system is at least a challenge to the evolutionary framework is scary. You cite incredulity, but at the same time are willing to believe anything that follows from the blind watchmaker hypothesis.
Thanks ever so much, I think I have a use for that. There was also a web site out there that alas I have lost with that HD crash. (If I did not have a Jolicloud Linux partition . . . )
I think Matteo had his tongue firmly in cheek, i.e. his remark was satirical, intended to point to a reductio ad absurdum.
I took up the invitation.
Let’s see what LYO, NR et al have to say onward.
Have fun for now . . .
PS: Anybody seen a pickup from the usual objectors out there in the evo mat blogosphere? That might be useful . . . as they perceive IC as ID’s weakest yet most threatening point, I think.
kf, my pleasure. The one thing I love about the Quantum foundation of our 3-D reality is that it firmly shows that ‘miracles’ are not precluded from really happening. Because QM shows that the foundation of our 3-D reality blatantly defies our concepts of time and space. Most people consider defying time and space to be a supernatural ‘miraculous’ event. I know I certainly do! Therefore I hold that our 3-D reality is indeed based on a ‘supernatural miraculous reality’ not constrained by time or space.
27 Amazing Miracles in Real Life – Readers Digest
Sarah McLachlan-Ordinary Miracle
I think that the principle of IC is not a direct refutation of evolution, but it, like many other things, makes evolution very unlikely. Allow me to give an analogy.
It has been said that an arch is an irreducibly complex system because two parts (the legs) depend on each other. Yet an arch could come about in a stepwise fashion.
You find arches in nature. I enjoy going to Arches National Park in southern Utah.
I don’t know of anyone who believes that those arches are designed.
But I would say that the principle of IC would say that they are unlikely in nature.
But now, imagine this: you find an arch on top of an arch. Surely you’d agree that this is even more unlikely? What about 3 arches all on top of each other? Isn’t it implausible that one of them hasn’t crumbled by now? What about 12 or a thousand arches all stacked up on each other? I would say that theoretically it is possible that it could happen, but it is very very unlikely. Evolutionists might say, “wait, what about deep time! You never know!”
You’re right, I’d never know, but I would point out that geology is also subject to deep time and all we ever get are single arches (that I know of).
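The stacked-arch intuition above can be put in numbers: if each arch independently forms and persists with some small probability p, a stack of n arches has probability p^n, which falls off geometrically. The value p = 0.01 below is purely illustrative, not an empirical estimate of anything geological.

```python
# Independent-survival model of stacked arches: probability p per arch,
# p^n for a stack of n. The point is the geometric decay, not the exact p.
p = 0.01  # illustrative only

for n in (1, 2, 3, 12):
    print(f"{n} stacked arch(es): p^{n} = {p ** n:.0e}")
```

With p = 0.01 a single arch is a 1-in-100 event but a stack of 12 is already around 10^-24, which is the shape of the argument being made.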
UPDATE: In further response to LYO, have added Fig. B(ii) on P & ID diagrams, with some further clarification on functional organisation. I have cited Wicken, 1979, and further underscored how functional organisation, even when based on existing components, is itself an information-rich step to an irreducibly complex entity that has to be explained. (It seems the previous reference to bicycle and car components being converted into steam engines was perhaps not enough to make the specific point clear.) GEM of TKI
An interesting example.
I note, the arches in nature — usually formed by erosion — are not made from discrete parts.
Thus, in the relevant sense, such is not complex. Perhaps Dembski’s elaborated form of the IC definition will help amplify:
If I saw an arch formed out of what seemed to be ashlars or bricks mortared together [i.e. it is now complex], in a claimed natural situation, I would be “suspicious.”
But an arch that seems to be the result of erosion is not suspicious.
Agreed. I wonder if lastyearon will say that it would be less suspicious if the legs were made of co-opted steam engine parts.
H’mm: I forgot. In UTech, Jamaica’s sculpture park, the sculpture for engineering is an Iron Horse made out of junkyard car parts. No tornadoes or hurricanes were detained as suspects. G
Interesting series kairos.
My first post here. I don’t reject evolution but logic of your posts is quite powerful. I would like to see biologist come here and try to bring it down.
A while ago I came across http://udel.edu/~mcdonald/mousetrap.html . Professor McDonald used animated pictures and what seemed very sound logic in trying to refute Behe’s irreducibly complex mousetrap idea. Miller used this example as well. Being a practical guy, I decided to test step 1 on that page. After all, how complicated can it be to bend some wire? Boy, was I ever surprised. After hours of hammering pieces of wire I learned a few things.
First, you have to come up with lots of good excuses to your wife for why you are making banging noises in the garage for hours.
Second, if you want to make a functional mouse trap from a piece of wire, many things are critical. OTOH there is nothing critical if you need just a bent piece of wire.
Critical requirements will “make or break” a functional mouse trap. They have to happen at the same time, too.
Critical things are the wire material, its thickness (diameter), length, the size (curvature) of bends, and the alignment of ends.
For example: at first look two wires could look the same, but a small end misalignment will make one just a piece of bent wire and the other a functional mouse trap.
Step one was extremely difficult, I couldn’t even think of step two.
In the end, I fully understand the respected professors used a wire slowly progressing into a full mouse trap as an analogy, but I could not resist a little fun to see if it really works. Unfortunately, it doesn’t.
Looks like even one component could be “irreducible” if it will be used for specialized function.
No. Irreducible Complexity is meant as an argument against the blind watchmaker, ie blind, undirected chemical processes.
So you didn’t understand what Dr Behe said about that. Interesting.
Yes. I linked to an article by Dr Behe explaining that very thing.
Either you didn’t read it (bad form) or didn’t understand it.
So which is it?
Irreducible complexity is a barrier for blind, undirected chemical processes.
That said, the greater the complexity, i.e. the number of core components, the bigger the barrier.
And seeing that any given living organism is the epitome of IC, then yes their origin is doubtful via blind, undirected chemical processes.
IC systems, including living organisms, could (have) evolve(d) via targeted searches, ie by design.
I had a colleague who a few weeks ago said exactly the same thing, exactly the same way. I thought it was real…
Could you tell us the story?
It would be something to hear . . .
GEM of TKI
Biological Scale Factor?
At smaller scales IC really seems more ‘apparent’, concrete or obvious. At the ‘macro scale’ it ‘seems’ more plausible that general ‘parts’ could meld and cobble together. At the smallest scale we are seeing, where the ‘parts’ are indivisible – actual molecules and molecular assemblies – it seems much more constrained and the IC much more glaring.
LastYearOn: “Irreducible Complexity is meant as an argument against evolution”
Incorrect. IC is an argument again blind watchmaker evolution.
Learn the difference.
“However by saying that the individual parts of an IC system could have had their own functions in the past, you effectively remove IC as an objection to evolution.”
That parts of an aircraft could be used for different functions is not an argument against them being used in an aircraft. Their existence neither falsifies nor confirms the hypothesis.
In post #41 from Collin:
And in post #48 from Joseph:
I seem to have made my point to Joseph and Collin. They have agreed that Irreducibly Complex systems could have evolved (although they say it is highly unlikely).
That is all I’m looking for in this thread. I’ll finish by saying that every single event that has ever happened in this universe was highly unlikely, and yet they all happened. The evolution of life on earth is no exception.
LastYearOn: “That is all I’m looking for in this thread. I’ll finish by saying that every single event that has ever happened in this universe was highly unlikely, and yet they all happened. The evolution of life on earth is no exception.”
That’s not a scientific argument.
At any rate, any particular deal of a deck of cards is just as unlikely as another. However, if you pulled up a royal flush ten times in a row at a poker game there would be justifiable reason to suspect that more than “random chance” was at work. This is unlikely on a much higher order of magnitude than a random deal of the deck. Is it “possible”? Why sure it is. Is it plausible? Not in the face of a better explanation, such as intent.
The arguments of ID are that for some features within biological systems, the more plausible explanation is intent.
If you’re happy with the more unlikely explanation, then, why, feel free to hang on to it. But I would keep an open mind.
I wrote a strongly critical review of Jerry Coyne’s Why Evolution Is True at Amazon, which spawned a discussion in which I was the only Darwin critic facing five or six Darwinists, so I can sympathize with your position in this thread.
My take on the whole issue of IC is that while you are correct that IC does not PROVE that Darwinian evolution is false (proving a theory false is notoriously difficult), it does shift the burden of proof. If you or anyone believes that the Darwinian explanation is correct, it’s not enough just to say, “Well, irreducibly complex systems COULD possibly have arisen through the cooption of simpler subsystems that performed different functions,” particularly given the huge number of IC systems in the full range of living organisms. If you want to demonstrate that Darwinism is the correct explanation, you have to provide plausible paths for these IC systems. Otherwise, your belief in the truth of Darwinism is just blind faith.
So if you believe that the flagellum and the blood clotting cascade and feathered flight and sexual reproduction and on and on arose through Darwinian processes, it is up to you to show how each could have arisen through a series of mutations each of which conferred some plausible fitness advantage and was within the probabilistic resources available.
And you should note that this challenge has been laid down since long before Behe wrote Darwin’s Black Box (see Denton’s discussion of the avian lung in Evolution: A Theory in Crisis), and not one such explanation has been forthcoming from any of the supporters of Darwinism, ever. And yet they continue to insist that “Evolution is a fact.”
UPDATE: I have explicitly pointed out on p.1 of the OP that, under the circumstances of requiring several well-matched components that have to be arranged in particular functional ways according to a wiring diagram, “it is almost trivial to run past 125 bytes [= 1,000 bits] of implied function-specifying information.” (Just think: (a) a technical drawing for a component as a rule easily exceeds 125 bytes, (b) a wiring diagram-type drawing for such components similarly easily exceeds the threshold, (c) just one typical 300-AA chain-length protein will be at the threshold, and (d) the requirement to send several [for the flagellum, 30 – 40 or so] proteins to a particular location and assemble them in a particular order also easily exceeds that threshold.) That is why the IC criterion’s definition does not explicitly need to address the FSCO/I threshold. But, that this is implied — even for something as allegedly simple as a mousetrap (think about how delicately matched the parts have to be for it to work reliably) — should be appreciated.
While I express appreciation for your comments to date, which help us all understand why there is such a sharp difference on the import of irreducible complexity as an empirically warranted sign pointing to design, I must observe that we seem to be moving into a The Matrix-style discussion.
That is, re, 52:
But, the point of what is logically/physically barely possible in the abstract was never in dispute. For instance, it is logically and physically, empirically possible — and inherently observationally indistinguishable from the world we think we inhabit — that the whole cosmos, including our memories of the past that we think we experienced — was actually created in an instant five minutes ago. Possibility and plausibility on inference to best and most responsible explanation are utterly distinct matters.
(In short, only if — as in the movie The Matrix [and, this is the root of that movie] — we can discover some gap that allows us to see that we are living in a Plato’s Cave world of deceptive shadow-shows, would we empirically justify such an ultimate conspiracist view of reality. Cf. discussions here and here. In short, logical/physical possibility cannot adequately warrant any serious claim about the state of affairs in the real world, on pain of reduction to the Plato’s Cave or Brains in Vats or Russell five minute universe absurdities of the esoteric philosophy seminar room.)
Unfortunately, you therefore have here fallen into the error of conflating logical/physical possibility with empirical plausibility sufficient to achieve warrant on inference to best explanation in light of relevant probabilities given the search resources of our observable cosmos. (As a first reference, kindly read Abel’s remarks on the universal plausibility bound here, in “The Universal Plausibility Metric (UPM) & Principle (UPP).” This — for whatever peer review is worth — is a peer-reviewed article from Theoretical Biology and Medical Modelling 2009, 6:27, and it was already brought to your attention by BA 77.)
1 –> Right from the beginning, in Origin, Darwin in Ch VI — as noted in the OP — unfortunately stacked the deck:
2 –> To compare, it is logically/physically possible for all the O2 molecules in the room in which you sit to spontaneously unmix themselves and flow to one end, leaving you gasping for breath.
3 –> But, such is so utterly unlikely on the distribution of possible configurations that if we were to see a room in which the O2 molecules were all at one end, we would immediately infer that the best, most reasonable explanation, was a deliberate intervention.
4 –> Indeed, when we look at the warrant for the second law of thermodynamics in its statistical form, we see the principle: the overwhelmingly most likely observed state of a system, and the overwhelmingly likely trend, will be towards the clusters of states that dominate the possible distributions of mass and energy under the set macroscopic circumstances.
5 –> So, while the situation where the O2 molecules in a room have all rushed to one end is logically/physically possible, it is so deeply isolated in the cluster of possible states, that on the gamut of the observable universe across its thermodynamically credible lifespan, it is utterly unlikely to occur once spontaneously.
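The magnitude in point 5 can be sketched in a couple of lines. A minimal Python illustration, where the molecule count n is an assumed round figure for a small room:

```python
import math

# Each O2 molecule independently has probability 1/2 of being in,
# say, the right-hand half of the room; all n bunched at one end
# together has probability 2^-n.
n = 10 ** 22   # assumed rough count of O2 molecules in a small room

log10_p = -n * math.log10(2)   # log10 of that probability
print(f"P(all at one end) ~ 10^{log10_p:.3g}")
```

The exponent is on the order of negative three sextillion, which is why no one expects to see it happen spontaneously even once in the lifespan of the cosmos.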
6 –> But, as a thought experiment, if a Maxwell-type Demon sits as gatekeeper at a threshold, he can allow O2 molecules to pass to the relevant [say, RH] side, and block other molecules, eventually achieving:
|| * * * * : o2 o2 o2 o2 ||
7 –> To do so, the demon has set a purpose, applied knowledge based information, and has manipulated the materials, forces and circumstances of nature to achieve that end.
8 –> So, we can see that not even the second law of thermodynamics would be warranted on Darwin’s standard; yet, it is in fact one of the most empirically reliable of all laws of physics: in a closed system, entropy is at least constant and tends to increase, and in an open system, raw injection of energy tends to make entropy increase as well.
9 –> As was discussed on p.1 of the ID Foundations series, no 2 [cf. Fig. A & context], to reliably get constructive work out of energy injected into an open system, the energy has to be coupled in a specific, functional way; much as our demon just above.
10 –> Why is that? A: because of the balance of accessible configurations in the space of possible configs. Once we see a pattern of deeply isolated recognisable islands of function or arrangement, sitting in a sea of overwhelmingly many non-functional configs, the observable universe does not have the search resources to credibly spontaneously get us to the shores of such an island, across its entire lifespan. So, if we are in fact at such an island of function or otherwise special configuration, we should go looking for an intelligent, purposeful explanation.
11 –> To again highlight, if the specification for the island of function takes up just 1,000 bits or 125 bytes, we are looking at 1.07*10^301 possible configs. The whole observed universe of ~10^80 atoms, changing state every Planck time [rounded down to 10^-45 s; about 20 orders of magnitude faster than nuclear particle interactions], for the thermodynamically credible lifespan of ~10^25 s [about 50 million times longer than the 13.7 BY said to have elapsed since the Big Bang], would experience only 10^150 states. In short, the scope of spontaneous search is so tiny relative to the space of configs that we have not even begun to look for the needle in the haystack: the search, for practical purposes, rounds down to zero scope.
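Those numbers are easy to verify directly, using the same figures cited in point 11:

```python
from math import log10

bits = 1000
configs = 2 ** bits                    # distinct 1,000-bit patterns
print(f"2^1000 has {len(str(configs))} digits")   # ~ 1.07 * 10^301

# Generous upper bound on states the observable cosmos could sample:
atoms, rate, lifespan = 10 ** 80, 10 ** 45, 10 ** 25
states = atoms * rate * lifespan       # = 10^150 states

fraction_searchable = states / configs
print(f"fraction of config space searchable ~ 10^{log10(states) - log10(configs):.0f}")
```

The cosmos-scale search samples roughly 1 part in 10^151 of the space, which is the sense in which it “rounds down to zero scope.”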
12 –> In this context, to rest on what is logically/physically possible as opposed to what is empirically plausible [Abel’s 2009 paper], is to rely on the materialistic equivalent of miracles; often by imposing question-begging criteria such as methodological naturalism.
13 –> Lewontin, inadvertently, shows the question-begging, but often militantly defiant reduction to absurdity (as well as the common underlying motivation) involved in such a resort:
14 –> Such selective hyperskepticism motivated by worldview agendas, is patently absurd and closed-minded, i.e. — unfortunately — irrational.
15 –> It is of a piece with the notion of thinking it reasonable that, by chance, rocks rolling down a hillside by the railway running into Wales by happy coincidence just happen to have fallen into the configuration: WELCOME TO WALES, and thinking one is well-warranted to believe such a message.
16 –> In short, the issue is not that any particular outcome is rare in the cluster of possible configs of atoms in our cosmos, so the one we see is just as reasonable as any other. Instead, it is that the micro-possibilities cluster in blocs, the overwhelming number of which are decidedly non-functional. So, since we have a known, routinely observed [and only observed] source of getting to such deeply isolated clusters of functional configs, the empirically well warranted inference is that that cause is at work, since the alternative, chance, is so overwhelmingly unlikely to get to such.
17 –> To see how corrosive the chance-maximisation view is: if chance governs all, it is logically and physically possible that every case where we think we have observed a lawlike regularity is the merest coincidence of chance. So, by giving chance the default like that, if we were consistent, we would have no reason to infer to natural law!
18 –> To resolve the situation, the reasonable approach is as was laid out in the ID Foundations series, no 1: we intuitively and instinctively use an explanatory filter that looks for natural regularities first, then chance contingency, then choice contingency; on empirically reliable signs.
19 –> Scientific methods simply systematise that common-sense approach.
In short, the proper question in a scientific context is not what is logically or physically possible, but what is empirically, observationally well-warranted as the best explanation.
And, irreducibly complex, functionally specific complex organisation, on such observationally grounded warrant, is best explained by intentionally directed contingency.
GEM of TKI
KB & BD:
GEM of TKI
The story is, that there is a bright young chap here, with whom I started an argument about ID vs evolution. His position was the carbon copy of LastYearOn’s stand:
Evolution has happened. I do not care about the extreme improbability of the chain of events, after all, the whole universe is improbable, nevertheless it exists. And also, using an intelligent agent to explain the existence of a single protein is not scientific, because it tries to explain something simple with something complex.
He admitted that he can recognise effects of human intelligence but was unwilling to explain how he does it, when he is willing to believe any improbable thing.
I was caught off guard by his demand that the explanation always be simpler than the phenomenon. Later I found the same statement in the Physics article in Wikipedia. Ever since, I have been thinking about it, and I came to the conclusion that it can be used as a rhetorical trap, since the definitions of “simple” and “complex” can be re-defined at any time.
For example, magnets attract iron but do not attract stone. Even a child can understand it. Nevertheless, the full explanation for this phenomenon requires a university degree to understand. Now, is the explanation really simpler than the phenomenon?
The notion that an explanation must be simpler than its effect is nonsense.
I explain this post, and I am a lot more complex than the text of a post!
So, let us correct:
A magnet is actually a relativistic phenomenon . . . in effect, in material part, the left-over part of Coulombic attraction when the moving charges are not in our frame of reference. That puts us past Augustin Coulomb to Albert Einstein and taking an imaginary ride on a beam of light then inferring the consequences thereof: On the Electrodynamics of Moving Bodies. (Yup, the actual original paper, in English translation. You try to figure it out without a College level physics education.)
Then, we have to head for unpaired spins of ferromagnetic materials a la Pauli’s exclusion principle, with a dash of Heisenberg on uncertainty just for fun. And, just what is the spin of an electron, a wavicle that looks like a particle or a wave depending on how you look at it just now? And, more . . .
The fact of a magnet picking up a pin is easy to demonstrate.
Its explanatory cause, on serious analysis, is anything but as simple: a grand tour of the past 150 years of physics!
So, the absurdity of the demand for “simple” explanations is patent.
In a more serious form, the principle is the Occam razor: explanatory hypotheses or entities should not be multiplied without NECESSITY.
But, agents are obviously real as experienced and observed fact, so unless you can rule out an agent ahead of time, the inference to absurdly good luck is a manifestation of insistent question-begging, not reasonableness. As, I have explored at length earlier at no 56; adding a footnote on just this issue at point 15.
Your co-worker probably needs to look at his worldview foundations in light of first principles of right reason, maybe this discussion will help.
All this is beginning to sound like a Plato’s Cave world!
GEM of TKI
F/N: Re Carbon-copy: Methinks, dey’s been readin deir Dawkins, or at least the web[log] sites that echo him. Cf Vox Day on the New Atheists — free PPT download; his book is here at Amazon, his freebie download page seems down at the moment. G
Thanks for the encouragement. Actually, the magnet example is from the Physics article in Wikipedia. It is amazing that just one sentence after stating:
Physics aims to describe the various phenomenon that occur in nature in terms of simpler phenomena.
It continues as:
For example, the ancient Chinese observed that certain rocks (lodestone) were attracted to one another by some invisible force. This effect was later called magnetism, and was first rigorously studied in the 17th century. A little earlier than the Chinese, the ancient Greeks knew of other objects such as amber, that when rubbed with fur would cause a similar invisible attraction between the two. This was also first studied rigorously in the 17th century, and came to be called electricity. Thus, physics had come to understand two observations of nature in terms of some root cause (electricity and magnetism). However, further work in the 19th century revealed that these two forces were just two different aspects of one force – electromagnetism. This process of “unifying” forces continues today, and electromagnetism and the weak nuclear force are now considered to be two aspects of the electroweak interaction. Physics hopes to find an ultimate reason (Theory of Everything) for why nature is as it is (see section Current research below for more information).
My congratulations to the author for demonstrating how to describe a phenomenon using simpler phenomena! Perhaps it would have been better to write:
One of the aims of physics is to discover if certain different observable phenomena can be explained as the various manifestations of the same property of matter.
kf, you might appreciate this:
Functional Complexity Paper
BA: Thanks, interesting. G
KF at 59
…the notion that an explanation must be simpler than its effect is nonsense.
…so true I feel! This realisation urgently needs to be taken on-board by scientific culture. I completely concur: Simple observations may in fact be the outcome of a complex nexus of causes as we routinely see, for example, in historical and sociological explanation.
That the objects of physics, however, reverse this order by the derivation of complex patterns using “simple” algorithms seems itself to be a remarkable, inexplicable, and clever “engineering” contrivance with no apparent logical warrant.
This process of “simplification” or “data compression” in physics can’t go on indefinitely; eventually a kernel of “incompressible brute fact” is reached. The upshot is that physics itself is logically incapable of embodying its own “self-explanation” as to why its elemental recipes of description work. Its ultimate explanatory incompleteness is thus guaranteed.
Self-explanation will never come from physics. Therefore the meta-explanation of physics is, I suggest, to be found not in simplicity but in the a-priori complex – probably the infinitely complex; for me personally that unpacks to mean providence/intelligence – and that’s one of the reasons I have a faith.
Elegant explanatory power is not “simplistic.”
Einstein hinted at that: everything should be as simple as possible but not simpler than that.
“Simple” tensor, matrix or differential equation etc expressions are brief because they enfold powerful symbols. They are elegant, even algorithmic — I think here of the whole world of operators in mathematics: s, D, and the like — but are not simplistic.
F = d/dt [P] is powerful precisely because it captures some very complex operations in brief symbols.
Similarly, for dW = F*dx.
As for E = m*c^2, a whole world lurks to be unpacked therein.
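To illustrate how much computation the compact symbols enfold, here is F = d/dt [P] unpacked numerically for the simplest possible case (the mass, g, and timestep are arbitrary illustrative values):

```python
# Finite-difference unpacking of F = d/dt[P] for a mass in free fall.
m, g, dt = 2.0, 9.81, 1e-6   # illustrative values, not from the thread

v1 = g * 1.0                 # speed at t = 1 s
v2 = g * (1.0 + dt)          # speed one small timestep later
F = (m * v2 - m * v1) / dt   # rate of change of momentum

print(F)                     # recovers m*g, to rounding
```

One terse symbolic statement stands behind every such numerical unpacking, for any force law, any trajectory, any coordinate system: brief, but not simplistic.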
Then, too, unification in powerful theories is one thing, for things that show lawlike regularities, but when it comes to things that have agent causes, that agency must be respected.
GEM of TKI
Even Dr Behe said he wouldn’t categorically deny that those systems could have evolved by Darwinian processes, but it would defy all logic, reason and evidence.
Bottom of page 203 of “Darwin’s Black Box”:
And THAT is why it is up to the evidence, data, observations and experiences: that is, it has to be demonstrated that blind, undirected chemical processes can construct functioning multi-part systems.
To date no such demonstration exists. Hot air and rhetoric will not refute IC as evidence for ID.
KF @ 66
Don’t over-interpret natural language – it works on fuzzy logic
F = d/dt [P] is powerful precisely because it captures some very complex operations in brief symbols.
….exactly; just as a complex pattern is implicit in the “simple” Mandelbrot algorithm.
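The Mandelbrot case really is striking: the whole algorithm fits in a few lines, yet the boundary it traces is endlessly intricate. A minimal sketch:

```python
# The Mandelbrot iteration z -> z^2 + c: a few lines of code whose
# output boundary is infinitely intricate.
def escape_time(c, max_iter=50):
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n      # escaped: c is outside the set
    return max_iter       # presumed inside the set

# Crude ASCII render of the classic view
for im10 in range(-10, 11):
    row = ""
    for re10 in range(-20, 10):
        c = complex(re10 / 10, im10 / 10)
        row += "#" if escape_time(c) == 50 else " "
    print(row)
```

Whether that counts as the explanation being “simpler” than the phenomenon is exactly the point under dispute: the recipe is short, but the structure it generates is not.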
I’ve read through some of this, so forgive me if I’m adding something already mentioned.
A bit long I’m afraid.
One of the key elements involved in any functional, multi-part system is the problem of combinatorial dependencies – i.e. parts that depend on parts that depend on still other parts – like we see in the flagellum.
Moreover, we have in this code that depends on code that depends on code.
As soon as we introduce dependencies, and more especially combinatorial dependencies (CD), we also introduce statistical mechanics.
Engineers get this. The great majority of Darwinian biologists simply don’t get it and thus bypass it as though it doesn’t exist.
Darwinists pretend CDs and statistical mechanics (SM) have no application in biology, or worse, they haven’t got a ruddy clue what CD or SM is!
There are CDs in a flagellum.
All component design specifications must meet specific physical criteria if any such motor is to work.
We have to consider for example :
- component sizes: must match with connected components
- component strength: capacity to resist external and internal forces applied to them, e.g. stress from torsion, shear, pressure, heat, etc.
- rotational forces and motility (e.g. revs per second), stiffness, …
- energy requirements: to move parts
- component coupling: flexibility, rigidity
- component material: too soft = galling; too hard = fatigue and eventually cracking
- component clearance: tolerances between parts
This is indeed a “Goldilocks” situation. Components must be just right or it won’t work.
The probability of having all the components set to correct specs -allegedly by RMs + NS- is small indeed. And this is supposing that the component parts already exist; the laws of physics & chemistry alone do not guarantee that at all.
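The joint-probability point can be made concrete with a toy model. All the numbers below are assumptions for illustration only, not measured values for any real component:

```python
from math import log10

# Toy model: each of k independent design parameters must land
# inside a narrow tolerance band. If each band covers fraction f
# of its feasible range, the chance all k come out right
# simultaneously is f^k.
k = 7       # the seven criteria listed above (size, strength, ...)
f = 0.01    # assume each spec tolerates 1% of its parameter range

p_all = f ** k
print(f"P(all {k} specs correct by chance) ~ 10^{log10(p_all):.0f}")
```

Even at a generous 1% tolerance per parameter, one component lands at roughly 10^-14; multiplying across dozens of interacting components collapses the joint probability further still.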
Now add to this the algorithmic information (prescribed information -Abel, Trevor) needed to assemble the parts in the right order and you have an impossible situation in front of you.
Nature, blind and without goals or purpose, is never going to assemble a flagellum -even supposing all the component protein parts already exist in the correct locus!
The probability against this occurring by the laws of physics & chemistry + selection, are ridiculously small.
Let the Darwinists get over it and accept the obvious and properly inferred design.
Order is vital in this problem.
So, the P of just getting the parts in the correct order is about 1/42! (given 42 protein parts, assuming the flagellum is made of that many).
Therefore, when Darwinists place their bets on the evolution lottery machine blindly accomplishing just one simple rotary engine by chance and necessity, it is truly a “tax for the stupid”.
“Yet”, the Darwinists answers, “lottery tickets are still producing winners. Aha! Evolution could too!”
Sorry, but this is a gross misunderstanding.
If you had a single lottery wherein the gambler had to select the exact sequence of 42 numbers out of 42 numbers, it would be highly suspicious anyone ever won.
Yet evolution allegedly did this billions of times over since Earth’s life-supporting climate arrived!
Incredible credulity is required to believe such nonsense.
Around a 1 in 10^51 chance is hardly good news for Darwin.
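A quick sanity check on the order of magnitude of 42!:

```python
from math import factorial, log10

orderings = factorial(42)        # ways to order 42 distinct parts
p_correct_order = 1 / orderings

print(f"42! = {orderings:.4e}")  # about 1.4 * 10^51 orderings
print(f"P(correct order by chance) ~ 10^{log10(p_correct_order):.1f}")
```

So the one-correct-ordering-of-42-parts lottery sits at roughly 1 chance in 1.4 × 10^51.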
Breaking an encrypted key of 42 bits long is no easy task even with intelligently conceived decrypting algorithms being employed on fast computers executing in gflops.
BUT! Nature isn’t even trying to produce a functional anything – i.e. it isn’t trying to win any lottery – it isn’t “buying tickets”!
It ain’t ever tried to build a rotary engine or anything else under Darwinian theory.
Amazing that Darwinists are so laughably gullible, not to mention pitifully ignorant, that they still push their idiotic theory as fact!
And all this is addressing a mere 42 part flagellum while assuming the parts already exist and are localized!
So, pretend the parts are non existent and have to be evolved themselves – all 42 or whatever it is exactly. Add localization.
Then remember all the multitude of other “must exist first” components and then do the math.
Even using wild approximations -conservative or liberal- you’ll find astronomically low P values.
You end up with virtually “impossible” written in huge letters all over Darwin’s inane ‘theory’.
In light of this, IC can rightly be called Irreducible Dependencies.
If this were indeed a lottery wherein you’re betting everything you have- would you still bet for Darwin in the fat and pudgy weakling corner?
Permit me to doubt it, or to regard anyone that does as the perfect fool.
There are many facts of nature that defy all logic and reason.
As for evidence, Irreducible Complexity is not evidence against evolution. One may legitimately point to a lack of evidence for evolution. As in, ‘wow that is a truly intricate and complex system. The evidence isn’t strong enough for me to believe that this system evolved’ And that certainly is debatable. It all depends on what you consider strong enough evidence. But IC does not rule out evolution.
In Behe’s quote, he states:
In this argument against the evolution of biological systems, Behe is implicitly assuming that natural processes cannot explain human technology. However natural processes do explain technology, by explaining humans. We may think of computers as somehow distinct from objects that formed from the direct result of natural process. And in important ways they are. But that doesn’t mean that they aren’t ultimately explainable naturally. Behe’s argument is therefore circular.
Think of it this way. In certain ways nature is prone to organization. From cells to multicellular organisms to humans. Computers are just the latest example.
Yup, only, the dependencies are in a 3-D mesh.
Do you ken what Borne is saying?
GEM of TKI
It is evidence for intelligent design. Ya see we are still stuck with the fact that there isn’t any positive evidence that blind, undirected processes can construct functioning multi-part systems.
IC strongly argues against the blind watchmaker and makes a positive case for intentional design. And there does come a point in which the IC system in question does rule out the blind watchmaker.
If they did you would have a point. However, that is the big question. We know natural processes cannot explain nature: natural processes only exist in nature and therefore cannot account for its origin, which science has determined it had.
Your problem is there isn’t any evidence that nature can produce cells from scratch. There isn’t any evidence that blind, undirected processes can do much of anything beyond breaking things- in biology anyway.
Think of it this way- you still don’t have any evidence that blind, undirected chemical processes can construct functional multi-part systems.
PS: LYO, have you ever had to design a complex, multi-part functional hardware based system that had a lot of fiddly interfaces, and get it to work? What was the experience like?
What about large programs? With time, they become incredibly complex. What was once built using various independent modules becomes more and more complicated, with various parts becoming dependent on each other, even if that was not intended. Although every module should theoretically be independent, a time comes when it is harder and harder to make a simple change without having to recode many parts of the program, until the program gets so complicated that it is unmanageable.
Various strategies can be used to slow the process, but large programs will always tend to grow more complicated with time, since they have to adapt to a changing environment (new OS, competition from other programs, client needs, etc.).
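That ripple effect of inter-module dependencies can be sketched with a toy dependency graph (the module names are hypothetical, purely for illustration):

```python
# Hypothetical module graph: edges read "X is used by Y".
deps = {
    "parser":    ["analyzer", "formatter"],
    "analyzer":  ["reporter", "optimizer"],
    "formatter": ["reporter"],
    "reporter":  [],
    "optimizer": [],
}

def ripple(module, graph):
    """All modules that may need rework if `module` changes."""
    seen, stack = set(), [module]
    while stack:
        for user in graph.get(stack.pop(), []):
            if user not in seen:
                seen.add(user)
                stack.append(user)
    return seen

print(sorted(ripple("parser", deps)))
```

In this tiny example a change to one low-level module can force rework in every module downstream of it, and in a program of hundreds of thousands of lines those transitive sets grow very large.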
So shouldn’t the question be: is irreducible complexity even avoidable in a system evolving in a frequently changing environment?
Not by chance, nor necessity, but by agency intervention.
So the question should be: if, every time we observe IC systems and know the cause, it has always been via agency involvement, can we infer agency involvement when we observe IC and don’t (directly) know the cause, once we have eliminated chance and necessity?
But in the case of a large program, it’s happening against the will of the programmers. It’s a natural tendency toward complexity that has to be fought against in order to keep the code manageable.
Interesting comment. Complex programs, in our observation, are invariably designed, using symbols and rules of meaning. That is they are manifestations of language. Which is, again in our observation, a strong sign of agency, intelligence and art at work.
Going beyond, as systems are designed and developed, we are dealing with the intelligently directed “evolution” of technology. That is, we see here how an “evolution” that is happening right before our eyes is intelligently directed and is replete with signs of that intelligence, such as functionally specific, complex organisation and information, as well as, often, irreducible complexity [there are core parts that are each necessary for, and jointly sufficient to achieve, basic function of the whole].
What is our experience and observation of complex language based entities such as programs emerging by chance and mechanical necessity? Nil.
What is the analysis that shows how likely that is? It tells us that in a very large sea of possible configs, islands of function will be deeply isolated and beyond the search resources of the observed cosmos on blind chance + necessity.
Now, you suggest that complex programs/applications “naturally” tend to become ever more complex.
But actually, what you are seeing is that the expectations of customers and management get ever more stringent; including the pressure that the “new and improved” is what often drives the first surge of sales that may be the best hope for a profit. So, by the demands of human — intelligent — nature, competition creates a pressure for ever more features and performance.
If one is satisfied with “good enough” one can get away with a lot: I still use my MS Office 97, and my more modern office suite is the Go OO fork of Open Office. Works quite well for most purposes, and keeps that ribbon interface at bay.
Feature bloat is not the same as progress.
GEM of TKI
I think you would do well to address the points Joseph has raised.
Also, in so doing, please bear in mind that “evolution,” in the sense of change across time of populations, is not the same as Darwinian macro-evolutionary theory. There is also no observed evidence that empirically grounds the claim that the accumulation of small changes due to chance variation and differential reproductive success is adequate to explain the origin of major body plans from one or a few unicellular common ancestors.
The required FSCO/I, across time, on our observation of the known and credible source of such information, is intelligence.
In the case of irreducibly complex systems, the issues C1 – 5 in the OP above point strongly to the need for foresighted creation of parts, sub assemblies, and intelligent organisation of same to achieve successful innovations.
This covers not only the existence of body plans, but the origin of life itself, the very first body plan, including the issue of the capacity for self-replication, a precondition for any evolution by descent with modification.
Observe as well that the strong evidence is that adaptations, overwhelmingly, work by breaking existing information in life forms, where it is adaptive to do so. There is no observational evidence of the spontaneous origin, by blind chance and mechanical necessity, of novel functional bio-information of the order of 500 or more base pairs.
GEM of TKI
And yet programmers are doing it.
What natural tendency towards complexity?
It seems that even one component per function could be “irreducible”. Please check post 47.
They are definitely not doing this on purpose. No programmer is saying to himself: I have to make this code as complex as possible. That’s quite the opposite! There are many books aimed to teach programmers to make code as simple as possible to make maintenance much easier.
What natural tendency towards complexity? It’s entropy. Code won’t start cleaning itself spontaneously; companies have to invest a lot of money and energy in order to keep code manageable.
Pardon, missed your post in the rush, since you are a newbie, you will be in mod at first.
Re 47 — yup, even one component can have parts to it or sections or shapes etc that are functionally critical and information-rich, including stuff like the nonlinear behaviour of a bit of wire! (I have linked Behe’s rebuttal on the mousetrap above now, too.)
GEM of TKI
PS: IC, BTW is not an “anti-evo” point but a pro-design point. Behe believes in common descent but identifies design as a key input to any significant evo. Even many modern YEC’s similarly hold with a rapid adaptation to fit niches, up to about the family level.
The tendency to complexity is a reflection of the demands on programmers, driven in the end by the tech environment. The nature in question is HUMAN.
GEM of TKI
I agree with you that the addition of new modules to a program is driven by human needs. But this is not what I mean when I say that the complexity of a program will increase over time.
What I’m talking about is large programs (several hundred thousand lines of code) developed by large group of programmer. There will always be flaws in those kinds of programs that will need to be addressed during maintenance. The problem is that when a change is made in a specific module, others changes have to be made in other parts of the program that are using this module.
Over time, the situation tends to get worse, and even making what would seem to be a simple change requires hours and hours of programming to cope with side effects in other parts of the program. Eventually, the program gets so complex that it can’t even be maintained anymore.
The need to add new features to a program is driven by competition in the market, but the increase in complexity is simple thermodynamics.
I see and take your focal point: maintenance not bloat. (My opinions on the latter are no secret — I really don’t like the ribbon and software with annoying features that cannot be turned off in any reasonably discoverable fashion. As for “personalisations” that end up mystifying anyone who needs to come in on short notice . . . )
I am not so sure, though, that thermodynamics is the right term, but I understand the issue of embrittlement due to implicit couplings and odd, unanticipated interactions. (I appreciate your point on deterioration due to embrittlement, thence eventual loss of function.)
And I see your use of “natural” in the sense of an emergent trend that seems inexorable, once we face the inevitability of some errors and the implications of subtle interactions. I think though that that is distinct from: that which traces to spontaneous consequences of chance + mechanical necessity without need for directed contingency.
Maybe we need to mark distinct senses of terms here.
Significant points . . .
GEM of TKI
The point of my intervention was simply to try to demonstrate that the complexity of a system evolving in a changing environment will tend to increase unless energy is invested to avoid it. I also wanted to show that an increase in the complexity of a system is not necessarily a good thing.
I hope I’ve been able to explain this correctly through my last posts. I definitely don’t think it will solve the debate on irreducible complexity and evolution, but I do think it’s an aspect of the problem that should not be ignored.
As a conclusion, I’d like to return briefly to your point about loss of function. When a program is tested correctly, there is no reason for loss of function to happen, even if there is an increase in complexity. Most changes are usually transparent to the user (since I bought my last OS, there have been many patches, but the vast majority of them didn’t change the way I interact with it).
There is of course a loss in terms of efficiency as the program gets larger and more complex.
Okay, though of course the “evolution” as observed is most definitely intelligently — not to be equated with “wisely” — directed.
The design view issue is, where do functionally specific complex organisation and associated information come from?
A: As this case again illustrates, consistently, on observation: from intelligence, as the islands-of-function-in-large-config-spaces analysis points out. And, the software embrittlement challenge you highlighted shows that design does not necessarily have to be perfect to be recognised — from its signs — as design. It also highlights how robustness with adequate performance may be a sound solution, warts and all. [Office 97 still works adequately, thank you. So does Win XP. And, OO is doing fine on far less space than MS Office.]
GEM of TKI
PS: On loss of function, I am thinking not only of efficiency but of becoming crotchety, unstable and buggy. BTW, I once had a PC with a Win ME install that worked very well, and stably.
With regard to the maintenance issues you describe: those issues are only applicable to poorly designed software. Good software features a high degree of encapsulation, i.e. the implementation details of the objects or modules making up the design are hidden and not relevant to the other parts of the software interacting with them. The interfaces are what is important.
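A minimal sketch of what that encapsulation looks like in practice (the class and method names are hypothetical, purely for illustration): client code depends only on the public interface, never on how the data is stored inside.

```python
# Encapsulation sketch: callers see add() and average(), nothing else.
class TemperatureLog:
    """Public interface: add(celsius), average(). Storage stays private."""

    def __init__(self):
        self._readings = []           # implementation detail, hidden

    def add(self, celsius):
        self._readings.append(celsius)

    def average(self):
        return sum(self._readings) / len(self._readings)

log = TemperatureLog()
log.add(20.0)
log.add(22.0)
print(log.average())  # 21.0

# Because clients touch only add() and average(), _readings could later
# become a running sum and count, a file, or a database table, with no
# change needed anywhere else in the program.
```

That last comment is the point: a change confined behind a stable interface cannot ripple out into the rest of the system.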
The biggest difficulty in software is getting the design or architecture correct given the assumptions. And often over time the assumptions change.
For me the fact that so much in life is shared at the cellular level is a strong indication of design. It speaks to an architecture in life which has been able to withstand and accommodate so much variety in life over so long a period.
Is there any really good complex software — by that standard — out there?
What is it like to maintain it?
Just curious . . .
As in: are we exchanging one difficulty for another? I.e. is there a reasonable basis for the messy code that seems to be ever so common out there?
borne @ 73 “…or to regard anyone that does as the perfect fool.”
Indeed. The problem that I see over and over is that it is literally impossible to reason with fools. (How to reason with someone who rejects the epistemological authority of reason?) Particularly when they think they are the ones being rational. It would be hysterically funny if the consequences of foolishness were not eternal. But they are, thus it is infinitely tragic.
lastyear @ 74 “There are many facts of nature that defy all logic and reason.”
I’m having a difficult time thinking of any. Maybe you could help me out. Share some of these facts with me. Thanks.
Re, LYO @ 74: Behe is implicitly assuming that natural processes cannot explain human technology. However natural processes do explain technology, by explaining humans . . . Behe’s argument is therefore circular. Think of it this way. In certain ways nature is prone to organization. From cells to multicellular organisms to humans. Computers are just the latest example.
Ironically, this is a case of making a circular assumption, then projecting it to those whose argument one objects to.
If we go back to the first two articles in the ID foundations series — here and here, we will see that there is excellent reason to distinguish the credible capabilities of nature [here, chance + necessity] and art [i.e. intelligence . . . an OBSERVED entity in our world, we need not smuggle in assumptions about its nature or roots, just start with that basic fact, it is real].
Namely, as complex, functionally specific organised entities sit in deeply isolated islands of function in large config spaces, undirected chance plus mechanical necessity, on random walks from arbitrary initial points, will scan so low a proportion of the configs that there is no good reason to expect them ever to land on such an island, on the gamut of the observed cosmos. This is the same basic reasoning as undergirds the second law of thermodynamics on why spontaneous trends go towards higher probability, higher entropy clusters of microstates: i.e. more and more random distributions of mass and energy at micro-levels.
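The arithmetic behind that claim can be made explicit with the round numbers commonly cited in these discussions — all of them assumptions for a back-of-envelope bound, not measurements: roughly 10^80 atoms in the observed cosmos, roughly 10^45 atomic state changes per second, roughly 10^25 seconds of cosmic history, set against the config space of just 1,000 bits.

```python
# Back-of-envelope search-capacity bound (round-number assumptions):
atoms = 10**80               # ~atoms in the observed cosmos
events_per_second = 10**45   # ~fastest atomic state changes per second
seconds = 10**25             # ~generous duration of cosmic history

max_search_events = atoms * events_per_second * seconds   # ~10^150
config_space_1000_bits = 2**1000                          # ~1.07e301

fraction_searchable = max_search_events / config_space_1000_bits
print(f"{fraction_searchable:.2e}")  # prints 9.33e-152
```

That is, even with every atom in the cosmos searching flat-out for its entire history, at most about one configuration in 10^151 of a 1,000-bit space could ever be sampled.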
But, we routinely see intelligence creating things that exhibit such FSCO/I. So, we have good — observational and analytical — reason to recognise such FSCO/I as a distinct characteristic of intelligence.
What LYO is trying is to say that once we ASSERT OR ASSUME that nature is a closed, evolutionary materialistic system, items of art such as computers “ultimately” trace to chance + necessity that somehow threw up humans. But we do not have a good right to such an assumption.
Instead, we need to start from the world as we see it, where nature and art are distinct and distinguishable on their characteristic consequences and signs.
When we do so, we see that the sign is evidence of the signified causal factor. On the strength of that, we then see that life shows FSCO/I and is credibly designed. Major body plans show FSCO/I and are credibly designed, and finally the cosmos as we see it shows that the physics to set it up is finely tuned, so that a complex pattern of factors sets up a cosmos that supports such C-Chemistry intelligent life as we experience.
So, once we refuse to beg worldview level questions at the outset of doing origins science, we see that a design view of origins is credible, and credibly a better explanation than the blind chance + necessity in a strictly material world view.
It should not be censored out or expelled, on pain of destructive ideologisation of science. Which, unfortunately, is precisely what is happening, as — pardon the directness — the materialistic establishment seems to value their agenda over open-mindedness.
GEM of TKI
In this argument against the evolution of biological systems, Behe is implicitly assuming that natural processes cannot explain human technology. However natural processes do explain technology, by explaining humans. We may think of computers as somehow distinct from objects that formed from the direct result of natural process. And in important ways they are. But that doesn’t mean that they aren’t ultimately explainable naturally. Behe’s argument is therefore circular.
This is really a pearl! So, you are saying that Behe’s argument is circular because, as everybody knows, “natural processes do explain technology, by explaining humans”.
And, I suppose, “natural processes do explain technology, by explaining humans” because Behe’s argument is circular, like probably all ID arguments.
I think I have found another point we could add to those brilliantly listed by Haack (see the pertinent thread) to detect scientism:
“Imagining circularity in others’ arguments where it is not present, and supporting that statement by truly circular arguments.”
Did you say natural processes explain humans? I sure would like to see your evidence (any evidence besides “just so stories”) for the “natural” explanation of some of these characteristics of humans.
The Amazing Human Body – Fearfully and Wonderfully Made – video
Yes. There is a lot of messy code out there. I have written some of it. Quite embarrassed to look at some of the stuff I have done. Although I am not a full time programmer.
Before I say anything else, let me say that code that is maintainable is inevitably well planned. It also normally has gone through a few design iterations before the problem domain is sufficiently understood for the design to be good. Which is another reason I find it hard to believe any unplanned system could be so efficient and have stood up over so many years.
I think the following are examples of excellent software:-
– Qt 4
– Symfony 2.0
None of those are applications themselves. They are application frameworks, toolkits, etc., but the same problems apply.
Maintenance programming is not the problem with software if it is well designed. By that I mean it has the following characteristics:-
5. Is DRY (“Don’t repeat yourself”)
6. Loosely coupled – normally anyway
I am sure there are more, but those are what come to mind.
For example, say my application opens images in various places. Given that I am only supporting one OS or environment, I could just call the functions needed to perform the actions I want on the image inline. But that would lead to a maintenance nightmare, because I would be duplicating a lot of code and violating my layering and encapsulation rules. So instead I create a class containing methods that perform the tasks I am going to be using over and over again.
Now suppose I want to support a number of platforms and any number of future ones. I must sacrifice some speed so that I can better maintain my code in future. So I will create a further level of abstraction. I create a standard interface, e.g. open, rotate, resize, save, etc., to my image as before. But when I use the open function, I use a factory class to determine which libraries I am going to use to manipulate my image. The factory will check which platform is being used and what type of image it is, and return an object that implements my standard interface, specific to the platform. In this way I have future-proofed my image handling. I can add functionality, change backends, etc. to my image handling within the application without any other part of the application needing to know how it is being done.
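A hedged sketch of that factory approach, in Python for brevity; all the class and backend names here are hypothetical placeholders, not from any real imaging library.

```python
# Factory sketch: client code sees one stable interface; the factory
# alone decides which platform backend is handed out.
import sys

class ImageHandler:
    """The standard interface the rest of the application codes against."""
    def open(self, path):
        raise NotImplementedError
    def rotate(self, degrees):
        raise NotImplementedError

class Win32ImageHandler(ImageHandler):
    # placeholder for a Windows-specific imaging backend
    def open(self, path):
        return f"win32 opened {path}"
    def rotate(self, degrees):
        return f"win32 rotated {degrees}"

class PosixImageHandler(ImageHandler):
    # placeholder for a Unix-style imaging backend
    def open(self, path):
        return f"posix opened {path}"
    def rotate(self, degrees):
        return f"posix rotated {degrees}"

class ImageHandlerFactory:
    """Chooses the backend once; callers never see the decision."""
    @staticmethod
    def create(platform=None):
        platform = platform or sys.platform
        if platform.startswith("win"):
            return Win32ImageHandler()
        return PosixImageHandler()

# Client code: no platform checks, no backend knowledge, future-proofed.
handler = ImageHandlerFactory.create()
print(handler.open("photo.jpg"))
```

Adding a new platform later means writing one new handler class and one new branch in the factory; no client code changes at all, which is exactly the future-proofing described above.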
I think security is the biggest issue in software development. You may be interested to read about a research operating system by Microsoft which moves operating systems into the 21st century. It’s called Singularity. It has the following characteristics, which are quite revolutionary.
1. No shared memory. A program cannot access memory which does not belong to it.
2. No context switching. All programs run in the highest privilege space but are prevented from doing anything malicious because they all run in a sandbox: they can’t do anything outside of their space. They can send a message to a message broker in order to communicate.
3. All programs are object code, i.e. they are not raw (compiled) instructions. They are checked beforehand to make sure they can’t do anything malicious.
4. Programs cannot modify their own code.
Buffer overflows, bad drivers and things like that are not possible in this operating system. It certainly is highly fault tolerant.
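Point 2 — no shared memory, message-passing only — can be loosely illustrated in ordinary Python. This is an analogy, not Singularity’s actual channel-and-contract mechanism: the isolated worker holds no references to the caller’s data and can only receive and send messages through queues.

```python
# Message-passing analogy: the worker shares no state with the caller.
import queue
import threading

inbox = queue.Queue()    # messages into the sandboxed worker
outbox = queue.Queue()   # messages back out; no other channel exists

def worker():
    # no access to the caller's variables -- only the message contents
    msg = inbox.get()
    outbox.put(f"echo: {msg}")

t = threading.Thread(target=worker)
t.start()
inbox.put("hello")
t.join()
reply = outbox.get()
print(reply)  # echo: hello
```

Because the only way in or out is a message, a misbehaving worker can corrupt at most its own state — the same isolation property that rules out the buffer-overflow and bad-driver failures mentioned above.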
Read about it here. http://research.microsoft.com/.....estack.pdf
The homepage is here. http://research.microsoft.com/.....ngularity/
P.S. Some code that is hard to maintain is that way because there has been no incentive to create easily maintainable code. I mean, I get paid to develop x and y functionality. I then get paid to perform a and b maintenance activities. The effort involved in doing a and b is inversely related to the effort for x and y. But the cost of a and b is not questioned, while the cost of x and y is. So I shortcut x and y and defer the effort to a and b.
So I suppose it is a management and incentive problem.
And there actually is some proper code out there!