Irreducible Complexity

Computers vs. Darwinism? A computer teacher comments


Recently, I have been reading Angus Menuge on the failure of Darwinism – from a computer teacher’s perspective. Menuge is a professor of philosophy and computer science at Concordia University in Wisconsin. The following excerpt from his book, Agents under Fire, is the clearest explanation I have read so far of why the Darwinist argument that intricate machines inside the cell can be built up without any intelligence underlying the universe are unbelievable:

A Diagnosis of the Failure of Darwinism

Repeatedly, we have seen that even if gene duplication can make all the parts of an irreducibly complex system simultaneously available, Darwinism cannot provide credible solutions to the problems of coordinating these parts and ensuring their interface compatibility.

From my perspective as a teacher of computer programming, this limitation of Darwinism as a problem-solving strategy is surprising. First, consider the analogous problem of coordinating a program’s instructions. As programs become more complex, it becomes virtually impossible to get them to work if they are written from the bottom-up, one instruction at a time.
 
With so many details, it is highly likely that some critical task is specified incompletely or in the wrong order. To avoid such errors, programmers find it essential to use top-down design. Top-down design is a problem-solving strategy that begins with an abstract specification of the program task and then breaks it down into several main subproblems, each of which is refined further into its own subproblems. This strategy is epitomized by such things as recipes, where the task is broken down into ingredients and utensils (initialization), the mixing and cooking of the ingredients (processing), and a specification of what to do when the dish is ready (finalization). The same approach is clear in the instructions to build “partially assembled” furniture, such as a bookcase.
 
First, the assembly of the bookcase is reduced to its major tasks: constructing the frame, the back, and the shelves. Then each of these tasks is specified in detail. At every level, the order of the tasks is important; for example, the back and the shelves cannot be installed until the frame is complete. A quality top-down design is sensitive to the proper placement of tasks, ensuring that a given task is not omitted, redundantly repeated, or performed out of sequence. In this way, top-down design facilitates the proper coordination of problem-solving modules.
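Menuge’s bookcase example can be sketched in code – a minimal illustration of top-down design in Python (the function names and the dictionary representation are hypothetical, chosen only to show the decomposition and ordering constraints):

```python
# Top-down design sketch: the abstract task (assemble a bookcase) is
# decomposed into ordered subtasks, and each subtask checks that its
# prerequisites are already in place.

def build_frame(parts):
    # Subtask 1: the frame must exist before anything can attach to it.
    parts["frame"] = "assembled"
    return parts

def attach_back(parts):
    # Subtask 2: depends on the frame being complete.
    if parts.get("frame") != "assembled":
        raise RuntimeError("cannot attach back before the frame is built")
    parts["back"] = "attached"
    return parts

def install_shelves(parts):
    # Subtask 3: also depends on the completed frame.
    if parts.get("frame") != "assembled":
        raise RuntimeError("cannot install shelves before the frame is built")
    parts["shelves"] = "installed"
    return parts

def assemble_bookcase():
    # Top level: the abstract specification, refined into ordered subtasks.
    parts = {}
    for step in (build_frame, attach_back, install_shelves):
        parts = step(parts)
    return parts
```

Run out of order – calling `attach_back` before `build_frame` – and the sketch fails immediately, which is the coordination point being made above.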
 
Unfortunately, natural selection cannot implement top-down design. Natural selection is a bottom-up atomistic process. Tasks must be solved gradually, independent from one another. There is no awareness of the future function of the assembled system to coordinate these tasks. If even intelligent agents (experienced programmers) require top-down design to solve complex problems, it is tendentious to suppose that unintelligent selection can solve problems at least as complex without the aid of top-down design.
 
In fact, even with top-down design, programmers find that it is necessary to do two levels of testing to produce a functional program. One level, unit testing, tests the function of a module in isolation from the whole program. The other level, integration testing, ensures that when all the modules are assembled, they interact in such a way as to solve the overall problem. Both kinds of testing are needed: it is a fallacy of composition to argue that since all the parts of a system work, the assembled system will also work.
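The two levels of testing Menuge describes can be sketched minimally in Python (the modules here – a parser and a doubler – are hypothetical stand-ins; the point is only the distinction between testing parts in isolation and testing the assembled whole):

```python
# Two hypothetical modules of a larger program.

def parse_number(text):
    # Module 1: convert a string to an integer.
    return int(text.strip())

def double(n):
    # Module 2: arithmetic on the parsed value.
    return n * 2

def pipeline(text):
    # The assembled program: the modules must agree on their interface
    # (parse_number must hand double exactly what it expects).
    return double(parse_number(text))

# Unit tests: each module checked in isolation.
assert parse_number(" 21 ") == 21
assert double(21) == 42

# Integration test: the modules checked working together.
assert pipeline(" 21 ") == 42
```

The unit tests could all pass while the integration test fails – for instance, if `parse_number` returned a string instead of an integer – which is exactly the fallacy of composition at issue.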
 
Compare the following examples.
 
Each football player is fit; therefore the team will play effectively.
 
Each brick is sound; therefore, the resulting wall will be strong.
 
The conclusions do not follow because it matters how bricks and players are coordinated, and it matters whether they are compatible. Say that each player is fit but that the offense tries to score only when it has lost possession: the team will be hopelessly uncoordinated. And if each player has a different play for the same circumstance, the team will suffer from incompatible elements.
 
Likewise, if bricks are sound but are piled at random or are incompatible in size and shape, it will be impossible to build an effective wall.
 
Unfortunately, Darwinism commits precisely this fallacy of composition in the case of irreducibly complex systems. It has to suppose that the independent unit testing of atomic components (which natural selection provides) is a plausible way of coordinating and attuning those components for their combined role. But it is not. The majority of subsets drawn from the power sets of sound football players and bricks will be completely dysfunctional when combined as teams or walls.
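The combinatorial claim – that most arrangements of individually sound parts are dysfunctional as a whole – can be illustrated with a tiny Python count, reusing the bookcase example (step names hypothetical):

```python
from itertools import permutations

# Three assembly steps where "frame" must precede both "back" and
# "shelves". Count how many of the possible orderings are workable.
steps = ["frame", "back", "shelves"]

def valid(order):
    # The frame must come before the back and before the shelves.
    return (order.index("frame") < order.index("back")
            and order.index("frame") < order.index("shelves"))

orders = list(permutations(steps))
valid_count = sum(valid(o) for o in orders)
# Only 2 of the 6 orderings respect the dependencies.
```

Even with just three steps and one dependency, most arrangements fail; the fraction of workable orderings shrinks rapidly as parts and dependencies are added.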
 
Note 65: From another perspective, Darwinism is also guilty of the reverse fallacy, the fallacy of division. It argues that because a given “irreducibly complex” system has a function, it therefore must be composed of subsystems with the same or a different function. But by itself the flagellum’s motor neither supports locomotion or any other function.
(pp. 120–21, Agents under Fire) You won’t read that in your government-funded textbook, so save this link.

 

More on Angus Menuge (fingered as part of this evil conspiracy):

New Scientist conspiracy files: A philosophy prof responds

Dorian Gray, I hope you believe in miracles, because …

New Scientist: More from the “just connect the dots and … ” files

Scare their pants off before they even start reading – the art of the panic headline

Also at the Post-Darwinist

Remember one gene codes for one protein? You ARE your genes? And all that? (Good. Now exercise your brain by forgetting it.)

Science and popular culture: You as a billboard for current science ideas

Fairness? What’s fairness?

Stuff that should be a joke, but Brit toffs are fronting it, so …

Anthropology: Darwinists vs. humanists

35 Replies to “Computers vs. Darwinism? A computer teacher comments”

  1. 1
    GilDodgen says:

    Angus’s arguments and logic are so obviously true, and trivially easy to understand. Undirected co-option is a complete fantasy, supported by neither evidence nor logic (although supported plenty by overactive imaginations). But Judge Jones says Behe ignores co-option as an explanation for irreducibly complex biological systems, so I guess that settles it.

  2. 2
    RoyK says:

    Hi Gil,

    I posted a comment in another forum. It seems relevant to your use of the phrase “obviously true,” so I’ll repeat part of it here:

    Things that we feel to be obvious aren’t usually arrived at by reason and experiment. Things that are obvious don’t need defending. What we feel to be “obvious” is usually the result of ideology.

    Interestingly, nobody responded directly to my earlier comment. Perhaps it was obviously false?

  3. 3
    lukaszk says:

    Everyone who created bigger program than “hello world” (ex. me) knows that it is imposible to do it by random.

    The question is, “Does life look like a computer simulation?” I am a programmer, and the longer I live, the more I think God is a Great Programmer.

  4. 4
    Collin says:

    RoyK,

    I am in law school and whenever somebody says “clearly” we know that they are just trying to lend weight to their argument.

    However, aren’t there things that are obviously true that people try to obscure because they defy their ideology? In other words, say Darwinism is built up year after year and then something simple and clear seems to refute it; wouldn’t people who have given a lot of time and commitment to Darwinism look for a way to twist what is obvious into that which is not obvious? Human beings do have the ability to turn that which is obvious into obscurity.

  5. 5
    RoyK says:

    In my experience, words such as “clearly” and “obviously” are usually bluster.

    I don’t find much clear here. Part of the fault may lie in Denyse’s introductory comment. Is there anything clear about this?

    why the Darwinist argument that intricate machines inside the cell can be built up without any intelligence underlying the universe are unbelievable

    Holy cow! A clear explanation requires a clearly stated problem.

    But the argument itself is only relatively clear, as it relies wholly on analogy with Menuge’s “perspective as a teacher of computer programming.” Arguments by analogy — extended analogy, in this case — are clear enough to someone who shares the “perspective” relied on. Thus Gil, a computer dude who is already committed to ID, finds a computer programming analogy for the failure of Darwinism “obvious.”

    Well, blow me down! Talk about preaching to the choir.

  6. 6
    Patrick says:

    Cut the semantics arguments–which distract from the topic–and face the real problem at hand.

  7. 7
    mynym says:

    Interestingly, nobody responded directly to my earlier comment. Perhaps it was obviously false?

    Or perhaps whenever anyone says “interestingly” they’re actually being pedantic and boring. Although it seems obvious that no one can know for sure.

  8. 8
    RoyK says:

    Touché.

  9. 9
    ribczynski says:

    Angus Menuge’s striking resemblance to Charles Garner cannot be due to mere chance.

    I infer design.

  10. 10
    mynym says:

    Arguments by analogy — extended analogy, in this case…

    Why do you suppose that the genetic code is called a code?

  11. 11
    lukaszk says:

    “Why do you suppose that the genetic code is called a code?”

    By chance ;P

  12. 12
    Borne says:

    I like what Menuge is saying but I find a couple of possible problems with regards to the computer analogy.

    Top-down design isn’t the only way to go. Bottom-up is actually pretty popular these days since the development of Object-Oriented software development, Aspect-Oriented development, and other such recent technologies.

    I also notice that he defines only 2 levels of testing – unit and integration. But there is also functional testing between those 2.

    Functional testing puts the “unit” into the main program (or a prototype) to test whether it works within the application. Integration is at the level of the system as a whole.

    His points still work though I think it could be important to mention this if only to preempt the obvious Darwinist objections.

    The bottom-up designs in software work very well. But they are still not at all like rm + ns. They still require a stated goal, function, or need. No one makes program Objects that do nothing. Though some root classes (objects) may appear to be little more than shells, polymorphism is key: the “child” objects (or classes) add implementation details and function while retaining the base class’s identity, base functions, and parameter signature(s).

    All this must be planned and built using strict rules. It’s very versatile but a programmer or architect still has to plan for it all.
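A minimal sketch of the kind of planned, bottom-up object design described above – a near-empty base class fixing the interface, with child classes adding implementation (class names are hypothetical; Python is used for brevity):

```python
# The base class is little more than a shell, but it fixes the method
# name and signature every child class must honor.
class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    # Child class: adds implementation while keeping the base signature.
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        # 3.14159 is used as an approximation of pi for illustration.
        return 3.14159 * self.radius ** 2

# Polymorphism: callers rely on the planned interface, not the
# concrete type of each object.
shapes = [Square(2), Circle(1)]
areas = [s.area() for s in shapes]
```

Note that even this “bottom-up” arrangement only works because the interface was planned in advance – which is the point being made about the need for a programmer or architect.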

    Nature can’t plan ahead. It doesn’t work by any rules other than the laws of physics and chemistry. It has no goals or function in mind since it has no mind. Yet molecular machines abound – there are many hundreds of them in any cell; yeast cells, for example, contain more than 250 nano-machines (see here).

    These machines are often constructed out of ‘Objects’ (proteins) already extant in the cell. The catch is that they don’t just pile up together in a random way and somehow, once in a million tries, end up making something that actually does useful work in the cell. There are coded instructions that detail the assembly plans. These instructions cannot be accounted for in any material-only way.

    Look in a factory and you may see a ton of parts scattered around in a bin. But look more and you see many of those same parts being placed together with other parts. Then if you look a little further on you will see a lot of various parts all stuck together in the form of say an automobile or a jet fighter or…

    The key is that the instructions (code) that put the parts (objects or classes) together into a functional, useful ‘application’ cannot be arrived at either by the physical properties of the parts themselves or by accidents or random movements of the parts.

    Same goes for the molecular machinery of the cell.

    A Darwinist might respond, “But you’re comparing living, mutating organisms with inanimate parts.” Doesn’t matter. Assembly instructions, my friend, instructions!

    And, are proteins themselves “alive”? Or, do they have little pico-sized brains telling them where to move and when? No.

    Thus materialism cannot explain life.

    Conclusion: “Life made DNA, DNA did not make life.” -Barry Commoner
    Sounds like ID to me.

  13. 13
    GilDodgen says:

    RoyK: Things that we feel to be obvious aren’t usually arrived at by reason and experiment.

    Many of them are. See my UD essay here.

    lukaszk: Everyone who created bigger program than “hello world” (ex. me) knows that it is imposible to do it by random.

    Even a Hello World program is beyond the reach of chance and selection. See my UD essay here.

  14. 14
    Borne says:

    I know my previous post was short (with a couple of typos) but hopefully to the point. Perhaps Gil or DaveScot or someone else with expert IT knowledge might add something to that?

  15. 15
    lukaszk says:

    “Even a Hello World program is beyond the reach of chance and selection. See my UD essay here.”

    I know people whose programming looks chance-driven 😉 so I am careful 😉

    Another problem concerns software – or firmware, which is also what instinct is. Consider the problem of the eye again. Hardware and firmware are always created simultaneously. So let’s imagine that the Darwinians are right, and the first animal with a primitive eye is born. What then? How would this poor animal know how to use the impulses he/she/it receives from the environment for the first time in history? In my opinion, he/she/it would die before he/she/it figured out what’s goin’ on – an eye without the proper instinct does not increase the chance of survival.

  16. 16
    ribczynski says:

    Menuge repeats several common IDer mistakes:

    1. Despite acknowledging that “[In NDE] There is no awareness of the future function of the assembled system to coordinate these tasks”, he makes the mistake of arguing as if there were a fixed target, contending that the probability of hitting that target is unreasonably low.

    Yet there is no preordained goal. NDE retains variations that improve reproductive success, whether or not they move the genome in a particular direction.

    The probability of NDE’s success is the probability of hitting any adaptive target.

    2. Menuge states that “Tasks must be solved gradually, independent from one another”, but then proceeds to argue as if the system were a combinatorial object, requiring many simultaneous, coordinated changes.

    3. He treats the organism as a von Neumann machine. It is not.

    Menuge writes:

    It [NDE] argues that because a given “irreducibly complex” system has a function, it therefore must be composed of subsystems with the same or a different function. But by itself the flagellum’s motor neither supports locomotion or any other function.

    There are three more errors in that short passage:

    4. Contra Menuge, a system is by definition not irreducibly complex if it contains a subsystem having the same function.

    5. IC systems can be formed by the elimination of parts, in which case it is not true that they must be composed of fully functional subsystems.

    6. Even if a system were composed of fully functional subsystems, it would not follow that an arbitrary excision (such as Menuge’s example of removing the whip of the flagellum and leaving the motor) would result in a viable organism.

    Evolutionary biologists only claim that if you took the changes that led to an IC system and reversed them, step by step, then the organism would be viable at each step within the environment in which that step took place.

  17. 17
    Collin says:

    Ribz,

    If an IC system can be formed by the elimination of parts, wouldn’t the pre-system system be IC? If not, why not? If so, who designed the designer? Wait, wait, … What I meant to say is infinite regress?

    RoyK
    Despite the problems with “obviously” and “clearly”-type words, there are things that every scientist treats as obvious without challenging them on intellectual grounds. These things are assumptions about the world that are necessary and proper (but not scientifically proven or provable). For example, Ockham’s razor is not proven or provable, but it just seems reasonable. It is “clearly” or “self-evidently” true.

  18. 18
    Borne says:

    ribczynski: Man, where does one start!?

    1. Despite acknowledging that … he makes the mistake of arguing as if there were a fixed target, contending that the probability of hitting that target is unreasonably low.

    The probability of NDE’s success is the probability of hitting any adaptive target.

    The probability of obtaining any complex functional machine through chance and selection is astronomically low.

    It is not a mere question of sticking randomly generated parts together. Order of assembly, strength compatibility, size, mass, etc. are vital.

    The order of assembly of the parts of any machine – even one with only a few parts (like the mousetrap) – is key to its function.

    NDE does not and cannot account for the existence of step by step assembly instructions in DNA.

    The flagellum, for ex., has around 42 parts. The assembly of a flagellum from those parts must be sequenced correctly or there is no flagellum!

    Just like a computer program’s instructions. You simply cannot code an ‘Else’ statement, for example, before a corresponding ‘If’ statement. In C++, for example, you can’t just put the closing } before the opening { – it won’t compile, let alone function.

    Hundreds of examples could be cited here.

    Your point here is just a variation of the monkeys with typewriters baloney.

    2. … argue as if the system were a combinatorial object, requiring many simultaneous, coordinated changes.

    Yes, and he’s right. Combinatorial dependencies are exactly what any multi-part machine implies.

    3. He treats the organism as a von Neumann machine. It is not.

    Indeed! It is far more sophisticated but still generally behaves like a machine-making factory.

    There are three more errors in that short passage:
    4. …a system is by definition not irreducibly complex if it contains a subsystem having the same function.

    Unless you can demonstrate that the sub-system is not itself IC.

    5. IC systems can be formed by the elimination of parts, in which case it is not true that they must be composed of fully functional subsystems.

    Well, that represents devolution, though, doesn’t it!

    Evolutionary biologists only claim that if you took the changes that led to an IC system and reversed them, step by step, then the organism would be viable at each step within the environment in which that step took place.

    I don’t think you quite get IC.
    Behe defines IC as:

    “A single system which is composed of several interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system to effectively cease functioning.”

    A flagellum, with around 40 parts, that accomplishes a very specific function – motility – ceases to allow motility if you remove for ex. the rotor or the proton power coupler. I would call that ceasing to function.

    Now here is what one mechanical engineer had to say on this:

    Let’s take a look at this in a little detail. First we have a passive pore that starts things off. Since this is the base of the eventual flagellum one has to ask is the pore the right size that the whip of the flagellum can provide the locomotion we see? If it is too small the resulting whip will not be able to handle the stresses from torsion and coupling. If it is too big the whip will be too bulky to be driven in any effective way by the motor. Then we add the secretion system. Is the pore the right size and of the right protein type for the existing secretion system? If not there will be no coupling of the two and no progress.

    Ok, now we have a selective pore and a secretion system, but does it secrete proteins that will be right for the whip? The whip has to have the right protein shape. In engineering, the components of a flexible whip have to be designed to mesh correctly such that there is just the right combination of coupling, flexibility, and rigidity. They also have to be the right material. If they are too soft there will be galling. If they are too hard fatigue cracks will set in and destroy the whip. The same goes for clearances between parts. This is a Goldilocks situation. Things have to be just right or it won’t work.

    Next we have to add the motor. Let’s assume we’re very lucky and a motor will fit and couple with what we have so far. However, the motor has to have the rpm and torque to drive the whip just right. If it doesn’t have enough torque we won’t get what we see. If the rpm is too fast the whip will destroy itself because of the hydrodynamic forces applied to it by the fluid. Then it and all the other components have to be sized just right to reverse, or the torsional forces on the whip will rip it apart. Remember, the diameter, materials, meshing of parts, etc. in this Darwinian scenario have no idea what will be required later. – steve petermann

    Can you see why principles of statistical mechanics remove all probability of a flagellum (or any other such nano-machine) from arising by pure luck + selection? I hope so.

  19. 19
    ribczynski says:

    Collin asks:

    If an IC system can be formed by the elimination of parts, wouldn’t the pre-system system be IC?

    No. If the removal of a part doesn’t nullify the function, then the system isn’t irreducibly complex according to Behe’s definition.

  20. 20
    Norman Doering says:

    Are search algorithms not irreducibly complex?

    I ask because Danny Hillis evolved some on his computers back in the 80s.

    You can read about it here:
    http://www.kk.org/outofcontrol/ch15-d.html

  21. 21
    ribczynski says:

    Borne,

    I can’t find any points in your comment that aren’t already addressed by my previous comment.

    Please read it again.

  22. 22
    Patrick says:

    I thought I’d chip in some data points to help prevent retreading of old ground (there’s a rut there now, seriously):

    2006 Nature Flagellum Sequence Comparisons

    The flagellum consists of 42 proteins. 23 proteins are “thought to be” indispensable in modern flagella. Of those 23, 2 are unique; of the remaining proteins, 15 more are unique. So that’s 17 unique proteins with no known homologs (it used to be believed that there were 30, and this figure was quoted by major ID proponents in the past). So in the last couple of years 13 additional homologs have been found.

    But before accepting those numbers, note the sequence similarities. 14 of these homologs were found by BLASTing on non-default settings, according to Matzke. Whether that should be considered acceptable I can’t say. So perhaps it’s debatable exactly how homologous/unique some of these proteins truly are (never mind Behe’s work on protein binding sites). But despite any bias in determining homology, citing “30 unique proteins” is likely no longer correct.

    That’s from 2006 (I would appreciate any recent data).

    More recently:

    All Flagellar Genes Derive from a Single Gene

    A paper published in the Proceedings of the National Academy of Sciences makes the startling claim that all flagellar genes “originated through the successive duplication and modification of a few, or perhaps even a single, precursor gene” (see abstract below). While consistent with Darwinian evolution, such excessive hyperevolution was too much even for the hyperevolutionists at the Panda’s Thumb (go here), who are now distancing themselves from its conclusion. What’s going on here? How could people publish such a ridiculous result, and in PNAS of all places? Let me suggest the following hypothesis: Liu and Ochman, the authors of the piece, are really ID advocates who are pulling a Sokal-style hoax, pushing the envelope to see how extreme they can make their claims for evolution and still get them published.

    References:

    Behe’s take: Darwinism Gone Wild

    Stepwise formation of the bacterial flagellar system

    New Scientist Overview

    Summarized by PaV:

    I’ve given the paper a brief look. I don’t see what the problem is as far as methodology goes. The authors cite a paper by Pellegrini et al., which, it appears, is the foundational paper on a technique called Phylogenetic Profiling. And, apparently, the BLAST program does such profiling. The Pellegrini paper dates from 1998. That’s almost ten years ago. Why didn’t anybody object to this methodology before? I think the problem is that Matzke realizes that what the authors have done is given evolutionary skeptics a tool. The paper suggests that there are about 24 “core” genes that are handed down from the Common Ancestor to all bacterial phyla. This seems to imply that, given that bacteria represent the “first life” on this planet, 24 genes somehow had to “evolve” before bacteria arose. Well, what form of life does this Common Ancestor represent? And why haven’t we encountered it before in the fossil record? And even if such a CA should be encountered, using the “co-option” method of Matzke and PZ Myers, this means that, miraculously, 24 simultaneous “co-options” took place (or else you have to come up with some selective advantage for each one of the ‘co-options’) in a Common Ancestor that we don’t even know existed.

    It’s no wonder Matzke wants to say this is bad science. It’s not bad science; it’s science that’s bad for Darwinism!

    rib,

    If the removal of a part doesn’t nullify the function, then the system isn’t irreducibly complex according to Behe’s definition.

    Actually, this putative “pre-system” could be IC in itself if it has a different function apart from the super-system. Also, a system can include parts that are not necessary for the primary functionality but increase efficiency (and these “extra” parts might be IC in themselves). Now, if a part that is included in the IC core is removed, that should cause a loss of system functionality, which you did get right.

    But I’ll let you (re)hash that out with Borne.

  23. 23
    ribczynski says:

    Patrick wrote:

    Actually, this putative “pre-system” could be IC in itself if it has a function different apart from the super-system.

    Yes, and the point you raise — that irreducible complexity can only be established with respect to a single, given function — is what makes IC, as a concept, fall so far short of the hype.

    Consider a system S that according to ID supporters could not have arisen via a Darwinian process. Suppose we determine that S is irreducibly complex. What does this tell us about S that we didn’t already know? Only that it could not have evolved in stepwise fashion from its subsystems while retaining the same function.

    Does it eliminate the possibility that the subsystems of S had a different function or functions? No.

    Does it eliminate the possibility that S evolved with the help of a “scaffold” that was then eliminated? No.

    Does it eliminate the possibility that S evolved from a larger, non-IC system with the same function through the elimination of redundant parts? No.

    Does it eliminate the possibility that S evolved from a larger system having a different function? No.

    Then why the hoopla?

  24. 24
    Borne says:

    rib

    I can’t find any points in your comment that aren’t already addressed by my previous comment.

    To me that means you didn’t understand any of it. What part of statistical mechanics, combinatorial dependencies, structural requirements and coded instructions don’t you get?

    As Patrick states, in the core of any machine, removing key elements in the core means destroying the functionality of the whole.

    If you were to delete the file kernel32.dll (or any other essential core file) from a Windows OS you would crash Windows – and irreversibly so; until that core file was put back, Windows would not even start.

    If you remove a hub cap from a car the car will still function perfectly, but if you remove the engine block …

    Your arguments are insufficient for dealing with IC.

    As I noted before, there are over 250 nano machines in yeast cells alone.

    These things are often inter-operative – they will ‘cooperate’ with each other and often work synchronously or sequentially – just like in a real automated factory.

    You can go on and on about the virtues of undirected chance in cellular incidents as much as you like; however, there isn’t enough time in the history of the earth (perhaps the universe) to bring so much utilitarian machinery into being by rm + ns. Again the principles of statistical mechanics are against it ever happening.

    Which is no doubt why Crick and Hoyle et al. came up with the panspermia hypothesis in the first place.

    “The notion that not only the biopolymer but the operating program of a living cell could be arrived at by chance in a primordial organic soup here on the Earth is evidently nonsense of a high order.”

    Sir F. Hoyle

    DNA with its vast coded information system (far surpassing anything men have devised or conceived) has been around for a very very long time.

    Yockey rigorously demonstrates that the coding process in DNA is identical to the coding process and mathematical definitions used in Electrical Engineering. This is not subjective, it is not debatable or even controversial. It is a brute fact:

    “Information, transcription, translation, code, redundancy, synonymous, messenger, editing, and proofreading are all appropriate terms in biology. They take their meaning from information theory (Shannon, 1948) and are not synonyms, metaphors, or analogies.”

    (Hubert P. Yockey, Information Theory, Evolution, and the Origin of Life, Cambridge University Press, 2005)

  25. 25
    ribczynski says:

    To me that means you didn’t understand any of it. What part of statistical mechanics, combinatorial dependencies, structural requirements and coded instructions don’t you get?

    My comment addresses those alleged obstacles to Darwinian evolution.

  26. 26
    JT says:

    [It seems someone must have written something very similar to the following before. You can probably preface a lot of what I write with, “Can you name the original author of the following?”]

    Maybe if God had taken Mr. Menuge’s course in programming as an undergraduate there wouldn’t be so many bugs in his current project.

    I can see Mr. Menuge taking God aside after class one day to give him a word of warning:

    “God, I need to talk to you for a minute. Listen, you may think you have all the answers, but I want you to quit raising your hand so much in class because it’s really slowing down my presentation. You have to get it into your head that you’re fallible just like the rest of us, and as such your program is going to have tons of errors if you don’t break it down in advance exactly like I said. Then you have to do unit testing and integration testing because that’s the only way you’re going to find all the errors in your program before you deploy it. You’re well on the way to failing this class if you don’t shape up…”

    And all the while God is rolling his eyes and glancing at his watch.

    ———————

    I also wanted to note that the development of new technology always proceeds by coopting components as much as possible, wholesale, from completely unrelated sources. Anything that works well enough is used as is, and optimization of components for a particular task only proceeds incrementally through repeated subsequent prototypes.

It seems to me that what we see in the biological world is the unit and integration testing, or rather a program being iterated through repeated imperfect versions. I personally believe God could have just snapped his fingers and had the entire universe materialize, precisely as he desired, with no errors, no planning, no integration, no testing, no nothing – just BAM – here’s my final project, and take that, Menuge, you dweeb.

But that isn’t how it happened. It’s the physical universe itself doing the designing, and God is looking on, and he’s the only one who knows what the final outcome will be. But something I’ve only slowly come to realize is that this is a position completely compatible with I.D. It seems a lot of you hold such a position. It seems that to be a card-carrying member, all you have to do is say that behind it all is a disembodied mind, or say that science has to sign off on that. I’m of the position that you just can’t put God in a box like that, or say that his “mind” functions like ours or some such. (“Who will you compare me to…” God says at one point in the Bible.) Not that I think ID people are worthless. It just seems pointless at times, each side throwing their severely limited knowledge in the other side’s face. Oh well.

    ——————————–

    [The following relates to this thread, BTW]

In the Altruism, evolutionary psychology, and the heroes of Mumbai thread, vjtorley [21] linked to the article Can Animals Think. The results are inconclusive. The first part of it is composed entirely of incidents of animals displaying surprising intelligence (which vjtorley in his comments did not mention). In the second half of the presentation, however, you have example after example that seems to indicate that even the higher mammals do not solve problems like we do. They seem to try alternatives completely at random, with no actual understanding of the problem at all, and stick with the first solution that works. So rats in a lab have to learn to push a lever, but one rat accidentally hit the lever by falling over backwards and hitting it with his head, so that is the method he uses from that point on whenever he wants his reward. In another instance a dog is filmed learning to open a lever on a gate, and what he is doing is trying completely random motions over and over again with no understanding of the problem at all, until completely by chance he finds the right motion, and so then he has “learned” it and knows it from that point on. I think the possible application to nature should be self-evident. And it’s as acceptable as an argument from analogy as the one being debated.
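The learning procedure the comment describes — random attempts with no model of the problem, sticking forever with the first action that happens to succeed — can be sketched as a toy simulation. Everything here (the action names, the gate-latch success condition) is a hypothetical illustration, not anything from the article being discussed:

```python
import random

def learn_by_trial_and_error(actions, succeeds, rng, max_tries=10_000):
    """Try random actions until one works, then stick with it forever.

    `actions` is a list of candidate behaviors; `succeeds` is a predicate
    saying whether a behavior achieves the goal. Returns the first
    successful action found, or None if none succeeds within max_tries.
    """
    for _ in range(max_tries):
        attempt = rng.choice(actions)  # pure chance: no understanding of the problem
        if succeeds(attempt):
            return attempt             # "learned": this exact motion is reused from now on
    return None

# Hypothetical example: a dog fiddling with a gate latch.
rng = random.Random(0)
actions = ["paw at bars", "push gate", "bite latch", "lift latch with nose"]
learned = learn_by_trial_and_error(
    actions, lambda a: a == "lift latch with nose", rng
)
```

Note that whatever action first succeeds is kept as-is, which is why the rat in the anecdote keeps falling over backwards onto the lever: the procedure preserves the first working solution, however arbitrary, rather than the best one.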

27. JT says:

    Rereading his comments, vjtorley probably adequately indicated that the subject of animal consciousness is not resolved.

28. JT says:

I personally think it could be a matter of life unfolding and developing in a random way, but reality itself (with reality being equivalent to God) is powerful enough that, without any actual awareness on its part being necessary, it can still cause everything to fall into place.

29. mynym says:

    Does it eliminate the possibility that the subsystems of S had a different function or functions? No.

There’s a difference between imaginary functions and sequences of co-option based on more imaginary evidence, and function that can be observed based on actual evidence. After all, it’s very easy for people to fail to deal with the logistics of the real world in imaginary scenarios of co-option and the like.
    Does it eliminate the possibility that S evolved with the help of [an imaginary] “scaffold” that was then eliminated [by imaginary events in the past]? No.

    There’s a difference between imagining things about the past and empirical observations which can be verified here and now.

    Then why the hoopla?

    It’s probably because most people put more stock in the application of the principles of engineering to empirical evidence having to do with function that exists in the real world than in imagining things about the past.

The interesting thing about Darwinists is that they seem to believe that they are protecting, or are responsible for, progress as we know it, and yet the people who actually run the engine of progress tend toward ID and even creationism. Perhaps that is so because engineers have to deal with the way things are in the real world and cannot rely on imagining things about the past as if doing so has anything to do with anything.

Darwinists even seem to imagine that simply because the Darwinian creation myth is rooted in imagery of progression, they have something to do with protecting us from the Dark Ages while safeguarding Progress as we know it. Of course, Darwinists have actually had little to do with progress as we know it, while engineers, who tend to be IDists and creationists, generally have had something to do with the technology by which scientia/knowledge advances.

30. Peter says:

    mynym:

    Of course Darwinists have actually had little to do with progress as we know it while engineers who tend to be IDists and creationists generally have had something to do with the technology by which scientia/knowledge advances

    Quite right! Your excellent post shows that in fact Darwinists are now holding back science by attempting to hold onto an outdated theory. As real science progresses it is becoming more and more evident that Darwinism is wholly inadequate to explain reality. That is why ribczynski must rely on sheer speculation.

The history of the creation of life can be either discrete or continuous. Biblical creation is discrete. It is no great feat to postulate a continuous history. I can’t understand why so many people consider stating an obvious possibility to be ‘brilliant.’ I think religious motivation is definitely one reason why Darwinists stick to a theory that science shows to be nonsense.

31. Freelurker says:

    mynym

    … while engineers who tend to be IDists and creationists generally have had something to do with the technology by which scientia/knowledge advances.

    Do you claim that engineers tend to be IDists and creationists? If so, can you back that up?

32. van says:

    Menuge obviously has this figured out. Darwinists have been looking at life wrong for decades. The Top-down view of biology would force science to re-evaluate everything. Look at the so-called fossil record. This record of (assumed) change was based on the idea that over time, organisms, by chance, changed their parts out randomly, only then to be selected if they were deemed fit. This mindless process must have taken thousands or millions of years to transform animals into different kinds of animals. Thus, the millions and millions of years have been artificially inserted into the evolution of animals because Darwinian evolution (bottom-up evolution) REQUIRES it. But now, top-down evolution (self-organized intelligent responses to environmental changes) completely eliminates the need for time. The door then opens wide for YEC. In short, what has been interpreted for decades as “evolution,” — that is, evidence for common descent — in the field and fossil record may be nothing of the sort, but instead just small, top-down adjustments which require virtually no time and have nothing to do with the construction of any organism.

    This leaves two giant holes: 1) the whole of the origin of the genetic code and 2) the whole of the origin of humans and animals as wholes.

33. van says:

    p.s. oops….two of the “wholes” in my last paragraph should have been “holes.” 🙂

    No edit feature here?

34. mynym says:

    Do you claim that engineers tend to be IDists and creationists? If so, can you back that up?

    I could point out that even Darwinists have noticed: “…the Salem Hypothesis states that creationists with formal educations are more likely to be engineers than they are to be other kinds of scientists. This hypothesis is supported primarily by anecdotal evidence: a good number of creationists who post to talk.origins claim to be engineers, and creationist organizations seem to be disproportionately populated by engineers. Why engineers would be more prone to creationism than other scientists is a good question.”

    But is it so hard to figure out why that would be so given that they work with design problems in the real world every day instead of engaging in natural theology of a sort based on prissy or feminized Christianity?

    E.g.: “Why would God make a panda’s thumb like this? It is settled then, Nature selected it and designed all the millions of organisms that exist.”

35. mynym says:

Hmmm, ribczynski didn’t reply to comments pointing out the elementary distinction between imagining things and empirical evidence, and the reason is obvious: what can generally be observed empirically is typically a form of irreducible complexity, where if a part is taken away then a lack of function results. For sociological, psychological, political, theological, or some other reasons, many scientists do not treat what is generally observed as the evidence that it is. They generally neglect empirical observation and instead focus on proposing “feasible evolutionary” routes in line with Darwinian reasoning: “If an organism could be found which I could not imagine coming about in a gradual sequence of events, then my theory would absolutely break down.” For some reason, those who are the first to blindly assert “There is no evidence for ID” also seem to be those most willing to cite their own imaginations as the equivalent of empirical evidence. If Darwinists can supposedly classify the arguments of ID as arguments from ignorance, then it is time that someone classified their argument as the “argument from imaginary evidence.”

    Irreducible complexity isn’t really an “argument” similar to Darwinian reasoning, it’s generally an empirical observation which can be observed in the form and function of organisms. The capacity of Darwinists to imagine things doesn’t change empirical facts or explain the history of all biological specification, form and species.
