Uncommon Descent Serving The Intelligent Design Community

Chance, Law, Agency or Other?


Suppose you come across this tree:

Tree Chair

You know nothing else about the tree other than what you can infer from a visual inspection.

Multiple Choice:

A.  The tree probably obtained this shape through chance.

B.  The tree probably obtained this shape through mechanical necessity.

C.  The tree probably obtained this shape through a combination of chance and mechanical necessity.

D.  The tree probably obtained this shape as the result of the purposeful efforts of an intelligent agent.

E.  Other.

Select your answer and give supporting reasons.

Comments
Bob [and F2XL]: I see your protest on probability. I first note to you that F2 seems to have very good reason, in an "Expelled" world, for not revealing a lot about himself. Now, I agree that it is better to address the merits rather than respond dismissively, but I also think you are conflating two (or three) very different things. Namely:
1a] De novo origin of life and its associated codes, algorithms, and maintenance and executing machinery; and 1b] similarly, de novo major body plans based on DNA codes of order hundreds of thousands to millions, up to ~3 billion, base pairs.
--> That is, OOL and what has been descriptively called body-plan level macroevolution. With:
2] Essentially microevolutionary changes in a near neighbourhood in Hamming/Configuration space.
By way of illustration, consider yourself on a raft in a vast Pacific, moving at random. There are relatively small islands, in archipelagos -- some of the islands being close together, some archipelagos being close-spaced, some with islands big enough to have mountain ranges. You start at an arbitrary location, with finite and limited resources. What are the odds that you will be able to first get to ANY island? [Negligible] If instead you start on any given island, what are the odds that you will be able to drift at random to a remote archipelago? [Negligible] By contrast, what are the odds you could drift among islands of a tightly spaced archipelago? [Much higher, but still low] Similarly, what are the odds that you will be able to move at random from one peak to another? [Surprisingly low, but doable.] In short, the three-element probability chains you posed in 129 are vastly divergent from the actual issues at stake. Hence the force and sting in Meyer's 2004 remarks, which, I beg to remind onlookers, passed "proper peer review by renowned scientists":
The Cambrian explosion represents a remarkable jump in the specified complexity or "complex specified information" (CSI) of the biological world. For over three billions years, the biological realm included little more than bacteria and algae (Brocks et al. 1999). Then, beginning about 570-565 million years ago (mya), the first complex multicellular organisms appeared in the rock strata, including sponges, cnidarians, and the peculiar Ediacaran biota (Grotzinger et al. 1995). Forty million years later, the Cambrian explosion occurred (Bowring et al. 1993) . . . One way to estimate the amount of new CSI that appeared with the Cambrian animals is to count the number of new cell types that emerged with them (Valentine 1995:91-93) . . . the more complex animals that appeared in the Cambrian (e.g., arthropods) would have required fifty or more cell types . . . New cell types require many new and specialized proteins. New proteins, in turn, require new genetic information. Thus an increase in the number of cell types implies (at a minimum) a considerable increase in the amount of specified genetic information. Molecular biologists have recently estimated that a minimally complex single-celled organism would require between 318 and 562 kilobase pairs of DNA to produce the proteins necessary to maintain life (Koonin 2000). More complex single cells might require upward of a million base pairs. Yet to build the proteins necessary to sustain a complex arthropod such as a trilobite would require orders of magnitude more coding instructions. The genome size of a modern arthropod, the fruitfly Drosophila melanogaster, is approximately 180 million base pairs (Gerhart & Kirschner 1997:121, Adams et al. 2000). Transitions from a single cell to colonies of cells to complex animals represent significant (and, in principle, measurable) increases in CSI . . . . In order to explain the origin of the Cambrian animals, one must account not only for new proteins and cell types, but also for the origin of new body plans . . . Mutations in genes that are expressed late in the development of an organism will not affect the body plan. Mutations expressed early in development, however, could conceivably produce significant morphological change (Arthur 1997:21) . . . [but] processes of development are tightly integrated spatially and temporally such that changes early in development will require a host of other coordinated changes in separate but functionally interrelated developmental processes downstream. For this reason, mutations will be much more likely to be deadly if they disrupt a functionally deeply-embedded structure such as a spinal column than if they affect more isolated anatomical features such as fingers (Kauffman 1995:200) . . . McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes--the very stuff of macroevolution--apparently do not vary. In other words, mutations of the kind that macroevolution doesn't need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don't occur.6
De novo creation of 100 million base pairs -- even using huge amounts of gene duplication to get the space to do the information generation in -- will essentially be a search in a config space of order 4^(100 million) (as there is no constraint on which of G, C, A, T may end up at the points in the chain until functionality re-appears, itself another challenge for the duplicate chains). Even if we take 3^(100 million), we still wind up in a space with ~2.96 * 10^47,712,125 states. And in that state space we know that stop codons "at random" will be quite common at odd points in the chain, so we know that functional states will be sparse, exceedingly sparse. Such a search, on the gamut of our observed planet [or even cosmos], will be maximally unlikely to succeed. By contrast, we commonly observe that FSCI, even on the relevant scales, is a routine product of agency. GEM of TKI PS: DS, I'm sure you would have brushed up against a Mac or an early workstation; these were dominated by the 32/16-bit 68000 family. The 6500 family had a strange relationship with the 6800 family, of course; indeed it was "inspired" by it, and peripherals could easily be mixed and matched, the 6522 VIA being especially nice. In turn the architecture of the 6800 bears a more than passing resemblance to an 8-bit version of the PDP-11. (DEC was of course bought out by Compaq, then HP . . .)kairosfocus
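For readers who want to check the orders of magnitude quoted above, here is a minimal Python sketch (an illustration using the 100-million-base-pair figure from the comment, not the commenter's own code); it works in log10 so the huge exponents stay manageable:

```python
import math

# Number of base positions assumed in the comment above (100 million).
n_bases = 100_000_000

# Size of a 4-letter (G, C, A, T) configuration space, expressed as a
# power of ten: log10(4^n) = n * log10(4).
log10_states_4 = n_bases * math.log10(4)

# The comment's more conservative 3-per-site variant: log10(3^n).
log10_states_3 = n_bases * math.log10(3)

print(f"4^{n_bases:,} ~ 10^{log10_states_4:,.0f}")
print(f"3^{n_bases:,} ~ 10^{log10_states_3:,.0f}")
# The second line reproduces the ~2.96 * 10^47,712,125 order of magnitude
# quoted above (the leading digits come from the fractional part of the log).
```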
June 3, 2008 at 03:11 AM PDT
F2XL, please explain why I'm wrong, rather than accusing me of ignorance about probability theory. Or, if you don't want to do that, kindly show us your credentials so that we know you have the authority to judge others' numeracy.Bob O'H
June 2, 2008 at 10:24 PM PDT
Sorry I can't keep posting on a regular schedule. Given the amount of free time I have on my hands, I will probably be better off just making larger comments regarding this discussion of the flagellum and the X filter, along with CSI, on the weekends. I will likely start responding to comments from 129 on up. It seems there is one commenter in particular who doesn't know how to take into account the odds of multiple events occurring. But yes, I haven't forgotten the task at hand. Nonetheless, I would like to quickly say that I agree with the notion that random number generators are pseudo-random: eventually they will repeat the same pattern if given enough trials (as far as I know).F2XL
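As an aside on that repetition point, a deliberately tiny linear congruential generator makes the cycling visible. This is an illustrative sketch only (real generators have vastly longer periods, but the principle is the same):

```python
# A deliberately tiny linear congruential generator (LCG), chosen only to
# make the point visible: any deterministic generator has finite state,
# so its output stream must eventually cycle.
def tiny_lcg(seed, modulus=16, a=5, c=3):
    x = seed
    while True:
        x = (a * x + c) % modulus
        yield x

gen = tiny_lcg(seed=7)
stream = [next(gen) for _ in range(40)]
print(stream)
# With modulus 16 the period here is at most 16, so the same block of
# values recurs well within the 40 samples printed.
```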
June 2, 2008 at 04:22 PM PDT
kf I'm trying to think of what computers I worked on with 6800 and 68K processors and am drawing a blank. I know there were some. Might have been video game consoles circa 1980 - 1982. I did some work with Apple IIx but those were 6502. Most of my work was with i80x86 hardware design and assembly language programming.DaveScot
June 2, 2008 at 06:04 AM PDT
DS:
First computer I built was an Altair 8800 (i8080 uP) which I believe predated your 6800 Heathkit by a few years . . .
I'd say! (A real pity the 6800 - 68000 evolution ran out of steam . . .) GEM of TKIkairosfocus
June 2, 2008 at 01:36 AM PDT
kf First computer I built was an Altair 8800 (i8080 uP) which I believe predated your 6800 Heathkit by a few years. First hardware design was an S-100 wirewrapped RS-232 card for the Altair. First program (that I recall) was a ~25 byte initialization sequence that had to be entered using binary front panel switches to initialize the RS-232 chip (I think it was an i8052). After the boot code was loaded in I could use an attached serial terminal to key in (using keys 0-9 and a-f) code faster. Good times. The next project was floppy disk controller.DaveScot
June 1, 2008 at 05:43 AM PDT
PS: Homebrew version!kairosfocus
June 1, 2008 at 02:37 AM PDT
Hi Dave: I see your round slide rule. Mine was a general-purpose rule, with two hinged plastic pointers with hairlines, like the hands of a watch. They were set up so you could set the angles on a scale then advance the whole as required, reading off answers on one of, as I recall, five circular scales, labelled something like A through E. As Wiki discusses, there is a key wraparound capacity in the circular rule, and of course its linear dimension is about 1/3 that of the equivalent straight rule, thanks to pi. [Memory is a bit vague now! Decided to do a web search -- bingo, here it [or its first cousin] is; I gotta check with my dad, as I think that I passed it back to him, complete with manual still in it. The manual is online at the just-linked site too. On further looking around, I think the unit is a Gilson Midget 4", right down to the case colour.] The key to the beasties is the power of logarithms. The key scales are log scales, and so they multiply/divide by adding or taking away lengths on those scales. Multiplying/dividing logs gives you powers and roots [and conversion of log bases, etc.]. Y'know, come to think of it, my favourite log-linear and log-log graph paper, or even ordinary paper with log scales, etc., are also analogue computers, with scientific visualisation tossed in. [Anyone here remember working with Smith Charts on T-lines etc?] Never thought of it that way before. (Next time I teach, say, A-level physics or 1st-year college physics, I will have to remember that bit!) GEM of TKI PS: GP, my first computer was a 6800-family based SBC by Heathkit, which I assembled and used to teach myself machine code/assembly language programming. Still have it, though the manual was soaked and had to be discarded. [It sat in a barrel for several years here in M'rat, in storage, as I had to be away from the volcano. Wonder if that manual is online?]kairosfocus
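As a quick illustration of the slide-rule principle just described (an editorial sketch, not the commenter's code), multiplication really does reduce to adding lengths on log scales:

```python
import math

# The slide-rule principle: because log10(a * b) = log10(a) + log10(b),
# multiplication can be done by adding two lengths measured on
# logarithmic scales, then reading the result back off the scale.
def slide_rule_multiply(a, b):
    length_a = math.log10(a)            # distance along the first log scale
    length_b = math.log10(b)            # distance slid along the second scale
    return 10 ** (length_a + length_b)  # read off the product

print(slide_rule_multiply(2.5, 3.2))    # ~8.0
print(slide_rule_multiply(365, 24))     # ~8760.0 (hours in a year)
# Division subtracts lengths instead, and log-log scales extend the same
# trick to powers, which is what the comment above alludes to.
```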
June 1, 2008 at 02:32 AM PDT
My first computer was analog - a slide rule. Kairos mentioned that there were some round versions of slide rules. I have one of those, or at least one common version, and need to know how to use it. It's the E6B. They're still commonly used today.DaveScot
May 30, 2008 at 04:59 PM PDT
Kairosfocus: thank you for your fascinating remembrances about analog computers. I really envy your background: my first experiences with a computer were with an Intel 8086, and I even tried to run a Mandelbrot program on it (you can imagine with what results!).gpuccio
May 30, 2008 at 03:35 PM PDT
On RNGs: Most people in the intelligent design camp are aware of the total lack of any truly beneficial random mutational events to account for the evolution of complexity we see in life (Dr. Behe, Edge of Evolution, 2007). So the problem, first and foremost, for the Theistic IDer (Intelligent Design) is to actually prove that "mind" can have a notable effect on "random chance" that is greater than the normal random chance that would occur from the "normal" environment. The following studies offer the first tentative "baby steps" in that direction of positive proof for the Theistic IDer. Page 187, "Your Eternal Self," Hogan: In the studies, random number generators (RNGs) around the world were examined after events that affected great numbers of people, to see whether the numbers began to show some order during the events. During widely televised events that have captured the attention of many people, such as Princess Diana's death and the 9/11 tragedies, the combined output of the 60 RNGs around the world showed changes at the exact moments of the announcements of the events that could not be due to chance. To add control to their study, researchers identified an event they knew was about to happen that would have an impact on large numbers of people and set up a study to measure the effects on RNGs in different parts of the world. Oct 3, 1995, the OJ Simpson verdict, was chosen: "...around the time that the TV preshows began, at 9:00 AM Pacific Time, an unexpected degree of order appeared in all RNGs. This soon declined back to random behavior until about 10:00 AM, which is when the verdict was supposed to be announced. A few minutes later, the order in all 5 RNGs suddenly peaked to its highest point in the two hours of data precisely when the court clerk read the verdict..." For me this is verifiable and repeatable evidence that overcomes the insurmountable problems that "random chance" has posed to Darwinism and offers positive empirical proof of the mind-over-matter principle for the position held by Theistic IDers.bornagain77
May 30, 2008 at 09:16 AM PDT
Hi Mavis [and GP]: I see I am dating myself. A walk down memory lane . . .

1] Analogue computers

A long time ago, there were machines known as analogue computers, which in electronic form -- electromechanical, mechanical and hydraulic forms exist[ed?] too -- used operational amplifiers, potentiometers, diode function generators, etc. They were used to set up [via patch-cords, almost like an old fashioned telephone exchange!] differential equations, and then run them as simulations, often showing the result on a plotter or a CRT screen or the like. And yes, at a certain point I actually had my hand on one of the old beasties. Nowadays, it is all digital simulations on PCs -- cf Gil's simulation packages used to model various dynamics. I notice the Wiki article classes good old slip-sticks [slide rules] as analogue computers too. Never really thought of that before, but yes, it'll do. [And, yes, I used to use one of those, as a student. My dad passed on to me a very special circular job, which still exists somewhere about.]

2] Getting "truly" random numbers

If we were to do an analogue computer today, it would probably be integrated with a digital one. And so I guess my heritage is shown in my statement that one could integrate a Zener noise source to make truly random numbers. [Of course, one will need to compensate for statistical distributions, as GP pointed out.] The point being that a Zener diode is a fair noise source, and we can build a circuit to take advantage of that. Then, we can rework that to get a well-distributed random number process. [NB: Back in the bad old days of my Dad's work in statistics circa 1960, a neat trick was to use the phone book to generate random digits, as the last 4 or so digits of a phone number are likely to be reasonably random.] Here is a random number service and its description of how it uses radio noise as a similar source of credibly truly random numbers.

3] On chance, necessity, intelligence

This morning, I saw an offline remark by email, and now a remark in the thread that touches on this. M:
Levers lift things. What are these “mechanical necessities of the world” you speak of? What does that mean? If you mean we build things according to our understanding of mechanics (material behaviour, physics etc) then, well, of course! What else? . . . . The random number generators we create generate random numbers . . . When people make things they don’t throw things together randomly . . . . You instruct the thing you built to do what you want it to do via the mechanism you built in when you were creating it.
1 --> We observe natural regularities, e.g. things tend to fall. These are associated with low contingency and give us rules of physical/mechanical necessity.
2 --> E.g. things that are heavy enough will fall, unless supported . . . cf Newton's apple and the moon, both of which were "falling" but under significantly different circumstances [non-orbital vs orbital motion, the role of centripetal forces . . .] that gave rise to the inference to the Law of Gravitation.
3 --> By contrast, certain circumstances show high contingency. This permits them to have high information-storing capacity; e.g. the 26-state element known as the alphabetical character. [Alphanumeric characters extend this to 128 or more states per character.]
4 --> Outcomes for such high-contingency situations may be driven by chance and/or intelligence. (E.g. we could use the Zener source to drive a truly random string of alphanumeric characters. This has in it intelligence to set up the situation, and randomness in the outcome.)
5 --> This would also be a case of a cybernetic system that is intelligently designed and configured but takes advantage of natural regularities and chance processes.
6 --> The organised complexity reflects intelligent action, and the designed [and hard to achieve!] outcome is a credibly truly random alphanumeric character string.
7 --> Thus, agents -- per observation -- may design systems that integrate chance, necessity and intelligence in a functional whole. Such systems are both logically and physically possible, as well as reasonably regularly empirically observed. This last entails that there is a probability significantly different from zero.
8 --> Further to this, we see that certain empirically observable, reliable signs of intelligence exist: organised complexity, functionally specified complex information, systems that are integrated and have a core that is irreducible relative to achieving function. Indeed, we routinely use these signs in significant and momentous situations to infer to intelligence at work: in day-to-day life and common-sense reasoning, in forensics, in scientific work, in statistics, etc.
9 --> Further to this, we also notice that we do not see significant signs of a fourth causal pattern. [For millennia of thought on the subject, "Other" in the post's title line has persistently remained empty, once we empirically trace out causal factors and patterns.]
10 --> All of this is commonplace. The problem is that once we apply it to the fine tuning of the cosmos, the origin of the organised complexity and FSCI in cell-based life, the origin of body-plan level biodiversity and the origin of a credible mind required to do the science, etc., we come up with interesting inferences to intelligent action. And these otherwise well-warranted inferences do not sit well with a dominant school of thought among many sectors of the civilisation we are a part of, namely evolutionary materialism.

4] Fractals

These are often generated by cybernetic systems, whether in a digital [or analogue -- Farmer et al did just that in the early days of chaos research] computer or in a biosystem like a fern. [I gather blood vessels in the body also follow a fractal growth pattern -- maybe that is a compact way to get a branching network that reaches out to just about all the cells in the body.] In the case of a shoreline [one of the classic cases], we have forces of necessity and chance at work.
Insofar as we may look at a snowflake as a fractal system, we see that there is a necessity imposed by the structure of the H2O molecule, giving rise to hexagonal symmetry. There is a pattern imposed by temperature [which type of flake forms]. Then, when the conditions favour the flat, dendritic flakes beloved of photographers, the presence of microcurrents and water molecules along the flake's path gives rise to the well-known complexity. [Cf discussion and links in the always linked, App 3.] But observe: the fractals are produced by lawlike processes and/or some random inputs. That is why the EF catches the first aspect as "law." Next, observe that I do not normally discuss CSI but instead FSCI, as that makes the key point plain . . .

5] FSCI

When a highly contingent situation exists, beyond the Dembski bound, and it also gives rise to a relatively rare functionally specified state, then that is a reliable sign of intelligence. For example, look at the Pooktre chair tree. Trees branch, often in a fractal-like pattern -- probably functional in allowing them to capture sunlight. But here, we are dealing with evidently intelligently constrained growth, producing a functional pattern recognisable as a chair. It is very rare in the configuration space, and it is functionally specified. It is information rich. It is an artifact, and one that was identified by commenters "live" as being a real enough tree. [Had it been Photoshopped, that too would have been a different type of design.] It is logically and physically possible for a "natural" tree to assume such a shape by chance + necessity only, but it is so maximally improbable that we confidently and accurately infer to design as its best explanation. This is the same pattern that leads us to infer to design for, say, the nanotechnology and information systems in the cell. For the observed universe does not have anywhere near adequate probabilistic resources to be likely to generate the cell by chance + necessity across its credible lifespan. GEM of TKIkairosfocus
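The bias compensation mentioned in point 2 of the comment above has a classic software-side counterpart in von Neumann's debiasing trick. The sketch below is an illustration only, with a simulated biased bit source standing in for sampled Zener-diode hardware; it is not code from the comment's author.

```python
import random

def biased_bit_source(p_one=0.7):
    """Stand-in for a sampled hardware noise source (e.g. a thresholded
    Zener diode). Simulated here, so this is for illustration only."""
    while True:
        yield 1 if random.random() < p_one else 0

def von_neumann_debias(bits):
    """Von Neumann's extractor: take bits in pairs; emit 0 for the pair
    (0,1), 1 for (1,0), and discard (0,0) and (1,1). For independent
    input bits this yields unbiased output, at a cost in throughput."""
    while True:
        a, b = next(bits), next(bits)
        if a != b:
            yield a

src = biased_bit_source()
fair = von_neumann_debias(src)
sample = [next(fair) for _ in range(10_000)]
print(sum(sample) / len(sample))   # close to 0.5 despite the 0.7 input bias
```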
May 30, 2008 at 01:54 AM PDT
Mavis: "The random number generators we create generate random numbers." No, that's not correct. They are pseudo random number generators, and they generate pseudo random numbers. No program can generate random numbers, because programs work by necessity. As Dembski has discussed in detail, generating a truly random sequence is conceptually a big challenge. Obviously, what a program can do is reading random seeds (external to the program itself) and elaborating them according to specific, and appropriate, necessary algorithms, so that the final sequence will look like a true random sequence. That's what is meant by "pseudo-random". In case you are not convinced, I paste here from Wikipedia: "There are two principal methods used to generate random numbers. One measures some physical phenomenon that is expected to be random and then compensates for possible biases in the measurement process. The other uses computational algorithms that produce long sequences of apparently random results, which are in fact completely determined by a shorter initial value, known as a seed or key. The latter type are often called pseudorandom number generators. A "random number generator" based solely on deterministic computation cannot be regarded as a "true" random number generator, since its output is inherently predictable. John von Neumann famously said "Anyone who uses arithmetic methods to produce random numbers is in a state of sin." How to distinguish a "true" random number from the output of a pseudo-random number generator is a very difficult problem. However, carefully chosen pseudo-random number generators can be used instead of true random numbers in many applications. Rigorous statistical analysis of the output is often needed to have confidence in the algorithm."gpuccio
May 29, 2008 at 10:48 PM PDT
F2XL: Welcome back! I don't know if you had the time to read my whole post about the calculation carefully. I am happy we agree about taking 3 and not 4 as the possible change space of a single mutation at a single site, but, as soon as you have time, I would really like to know your opinion about the second point I make (indeed a more quantitatively relevant one): that the probability of obtaining a specific single nucleotide mutation is 1 in (3 * 4.7 million), and not 1 in 3 to the 4.7 millionth power. The reason for that is that the number of possible single mutations, with one mutational event, is exactly that: 3 * 4.7 million. Instead, 4 to the 4.7 millionth power is the total number of combinations of the whole genome, that is, the whole search space. In other words, there are 4^4.7 million different sequences that a genome that long can assume. That's an important value, but it is not the one pertinent here. Anyway, that consideration should not affect your reasoning much, because, if I am right, the real probability for a single coordinated mutation of 490 specific nucleotides, after 490 mutational events, is still low enough to give strength to any possible reasoning against chance, being (again, if I am not wrong) equal to the probability for a single mutation raised to the 490th power, that is about 1 : 10^3430. There are in reality other small adjustments to consider, for instance the redundancy of the genetic code, which allows for synonymous mutations, but that would not change much the order of the result. So, I believe that the order of probability is anyway so low that you can confidently go on with your argument, but it is important that we all agree (including our friendly "adversaries") on how to compute it, to avoid possible misunderstandings. Again, if I am wrong in my calculations (perfectly possible, I am not a mathematician), I would appreciate the input of someone who can give us the correct mathematical and statistical perspective.gpuccio
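For those who want to check the arithmetic in the comment above, a minimal sketch (an editorial illustration using the comment's own figures of 4.7 million sites, 3 alternatives per site and 490 required changes):

```python
import math

genome_length = 4_700_000      # E. coli genome size used in the comment
per_site_changes = 3           # each base can mutate to 3 alternatives
n_required = 490               # specific coordinated mutations assumed above

# Probability that one mutational event hits one specific site AND makes
# one specific change:
p_single = 1 / (per_site_changes * genome_length)   # ~1 / 1.41e7

# Probability of 490 specific changes in 490 events (order irrelevant),
# worked in log10 to avoid underflow:
log10_p = n_required * math.log10(p_single)
print(f"p_single ~ 1 in {per_site_changes * genome_length:,}")
print(f"log10(p_490) ~ {log10_p:,.0f}")   # ~ -3,503

# The comment rounds 1.41e7 down to 1e7 for simplicity, which gives the
# quoted ~1 : 10^3430; without that rounding the figure is ~1 : 10^3503.
```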
May 29, 2008 at 10:37 PM PDT
Mavis: "If there is some doubt, as you say “necessarily”, could you give me the circumstances under which a fractal will have none, some, alot, a large amount of CSI?" No, I was not clear enough. There is no doubt that a fractal has not CSI. Indeed, I wrote "is not CSI, and is not classified as necessarily designed by the EF": the "necessarily" does not refer to the nature of a fractal, but to the nature of the EF. The EF detects, CSI, not design. CSI implies designe, but design does not imply CSI. Therefore, if a piece of information id detected by the EF as having CSI, it is interpreted as "necessarily designed" by the filter itself. On the contrary, if the same piece of information has not CSI (the case of a fractal), then the EF cannot judge if it is designed or not. But it can certainly affirm that it has not CSI, so it is not "necessarily designed". Is that clear?gpuccio
May 29, 2008 at 10:16 PM PDT
I see F2XL hasn't appeared here. I'll wait for his response to my last comments. Perhaps he has a real life as well. :-) Good observation. :) Yeah, I probably won't be able to get back on here 'till this Saturday due to a few schedule conflicts. But don't worry about getting sidetracked, I'll still come back on here and continue from where I left off. After taking a quick skim of the comments on here, I noticed gpuccio made an interesting observation on what I was doing. After reading what he said, I noticed that I did in fact make an error which he was able to point out. I initially used 4 base pairs as a reference point for the different outcomes you could get when changing a piece of information for the homologs. But since there are four total, and thus only THREE other possible ways an existing base pair can change, then to get the odds for a single mutation (out of a conservative estimate of 490) that can help beat the 5 criteria and thus pass the neutral gap, you would take 3 to the 4.7 millionth power (as gpuccio pointed out). So the new odds (single mutation) would be on the order of less than one chance in 10 to the 2,242,000th power. I was confusing increasing information with changing existing information in my numbers. I'll move on and respond to some of the other comments Saturday morning, so don't worry about getting sidetracked.F2XL
May 29, 2008 at 07:08 PM PDT
PS: Mavis, cybernetic systems are based on our insight into the mechanical necessities of the world.
Levers lift things. What are these "mechanical necessities of the world" you speak of? What does that mean? If you mean we build things according to our understanding of mechanics (material behaviour, physics etc) then, well, of course! What else?
We set up entities that then reliably do as instructed [even generate pseudo or credibly actual random numbers — no reason a Zener noise source cannot be put into a PC for instance].
The random number generators we create generate random numbers.
But, the configuration of components to make up a system is anything but a random walk.
When people make things they don't throw things together randomly.
Then, we program them [assuming the system is programmable separate from making up the hardware config].
You instruct the thing you built to do what you want it to do via the mechanism you built in when you were creating it.
We patch analogue computes and adjust pots, putting in diode function generators etc, we tune control systems [maybe set up adaptive ones . . .],
Who is this "we" you speak of? Do you do all those things?
we write and load software.
Do we?
Some of that stuff generates fractals.
Yes, it's that stuff I was trying to talk about before all the wordy distractions.
We can compare fern growth a la screen with real ferns, noting self-similarity and scaling patterns etc (well do I remember doing and playing with such coding).
And what conclusions did you come to when you were so playing?
But in so doing, we must ask how do fern leaves grow?
Must we? I thought it was about fractals still?
Ans, accor to an inner program, i.e. we are back at the program.
Sigh. What else would tell a fern leaf how to grow, apart from a set of instructions telling a fern leaf how to grow? Or rather, a set of rules. gpuccio
A fractal output, in itself, is not CSI, and is not classified as necessarily designed by the EF
If there is some doubt, as you say "necessarily", could you give me the circumstances under which a fractal will have none, some, a lot, or a large amount of CSI? And how would one perform the calculation to determine that? After all, you are not just assuming that it will not have measurable CSI without performing the calculation?Mavis Riley
May 29, 2008 at 02:35 PM PDT
Basically, this review describes evidence that something that appears to be irreducibly complex can have functional intermediates, and that fitness can increase along paths in sequence space. So, Behe just shifts the goalposts: he says this is minor, and that bigger shifts would be impossible.
Eh? Behe's been saying the same thing for years before that article was published. Ditto for other ID proponents. I can't remember where I read/heard it but Behe previously talked about "weak IC" (or maybe it was someone else using that phrasing, reporting on what he said) that's composed of a couple components, and how Darwinian indirect pathways should be capable of producing such structures. I remember Dembski talking about possible pathways, including gene duplications, for modifying existing CSI in a book. It's been years since I read it, but he's always acknowledged that minor islands of functionality should be accessible. And this isn't an issue of definitions...I think ID proponents have always been very clear on what is considered by "minor" and "trivial".
Well, go ahead mate and collect the evidence!
Okay. The Ohno's Dilemma paper is addressing methods by which gene duplicates might be preserved long enough for one copy to diverge in function. Just scanning the abstract suggests that they are working from a model that assumes the starting gene had several activities to start with, one at a very low level. After duplication, one copy is subject to selection for the low-level function, allowing divergence over time. This model of promiscuous function has been proposed in similar form by a number of other people. The problem with all such models is that they assume that there will be overlapping low-level functions available somewhere in the genome (or biosphere, if you allow for horizontal gene transfer) for any conceivable desirable step. Check out the following paper for a refutation of that idea, though that is not how they frame it. Multicopy Suppression Underpins Metabolic Evolvability. Wayne M. Patrick, Erik M. Quandt, Dan B. Swartzlander, and Ichiro Matsumura. Mol. Biol. Evol. 24(12):2716–2722, 2007. doi:10.1093/molbev/msm204. Department of Biochemistry, Center for Fundamental and Applied Molecular Evolution, Emory University, Atlanta, Georgia
Our understanding of the origins of new metabolic functions is based upon anecdotal genetic and biochemical evidence. Some auxotrophies can be suppressed by overexpressing substrate-ambiguous enzymes (i.e., those that catalyze the same chemical transformation on different substrates). Other enzymes exhibit weak but detectable catalytic promiscuity in vitro (i.e., they catalyze different transformations on similar substrates). Cells adapt to novel environments through the evolution of these secondary activities, but neither their chemical natures nor their frequencies of occurrence have been characterized en bloc. Here, we systematically identified multifunctional genes within the Escherichia coli genome. We screened 104 single-gene knockout strains and discovered that many (20%) of these auxotrophs were rescued by the overexpression of at least one noncognate E. coli gene. The deleted gene and its suppressor were generally unrelated, suggesting that promiscuity is a product of contingency. This genome-wide survey demonstrates that multifunctional genes are common and illustrates the mechanistic diversity by which their products enhance metabolic robustness and evolvability.
In brief, they knocked out 104 different metabolic genes in E coli, then asked if any of the E coli genes in its entire genome was able to rescue the cells when vastly overexpressed. Take home message: out of 104 genes knocked out, only 20 could be replaced at all. That leaves 84 unrescued genes that could not be replaced by promiscuous activity or any other mechanism. Another thing to note is the presumption of the model.
Before duplication, the original gene has a trace side activity (the innovation) in addition to its original function.
Notice that the end of the search, the innovation, is already present at the beginning. How unlikely yet how convenient! Other relevant info: http://www.proteinscience.org/cgi/reprint/ps.04802904v1.pdf (by Behe and Snoke, August 2004). Eytan H. Suchard, "Genetic Algorithms and Irreducibility," Metivity Ltd
Genetic Algorithms are a good method of optimization if the target function to be optimized conforms to some important properties. The most important of a is that the sought for solution can be approached by cumulative mutations such that the Markov chain which models the intermediate genes has a probability that doesn't tend to zero as the gene grows. In other words each improvement of the gene -set of 0s and 1s follows from a reasonable edit distance - minimum number of bits that change between two genes -and the overall probability of these mutations does not vanish. If for reaching an improvement, the edit distance is too big then GAs are not useful even after millions of generations and huge populations of millions of individuals. If on the other hand the probability of a chain of desired mutations tends to zero as the chain grows then also the GA fails. There are target functions that can be approached by cumulative mutations but yet, statistically defy GAs. This short paper represents a relatively simple target function that its minimization can be achieved stepwise by small cumulative mutations but yet GAs fail to converge to the right solution in ordinary GAs.
A two-part paper by phylo Royal Truman and Peter Borger titled "Genome truncation vs mutational opportunity: can new genes arise via gene duplication?" Here is the abstract of Part 1:
Gene duplication and lateral gene transfer are observed biological phenomena. Their purpose is still a matter of deliberation among creationist and Intelligent Design researchers, but both may serve functions in a process leading to rapid acquisition of adaptive phenotypes in novel environments. Evolutionists claim that copies of duplicate genes are free to mutate and that natural selection subsequently favours useful new sequences. In this manner countless novel genes, distributed among thousands of gene families, are claimed to have evolved. However, very small organisms with redundant, expressed, duplicate genes would face significant selective disadvantages. We calculate here how many distinct mutations could accumulate before natural selection would eliminate strains from a gene duplication event, using all available 'mutational time slices' (MTSs) during four billion years. For this purpose we use Hoyle's mathematical treatment for asexual reproduction in a fixed population size, and binomial probability distributions of the number of mutations produced per generation. Here, we explore a variety of parameters, such as population size, proportion of the population initially lacking a duplicate gene (x0), selectivity factor(s), generations (t) and maximum time available. Many mutations which differ very little from the original duplicated sequence can indeed be generated. But in four billion years not even a single prokaryote with 22 or more differences from the original duplicate would be produced. This is a startling and unexpected conclusion given that 90% and higher identity between proteins is generally assumed to imply the same function and identical three dimensional folded structure. It should be obvious that without new genes, novel complex biological structures cannot arise.
Here is the abstract of Part 2:
In 1970, Susumo Ohno proposed gene and genome duplications as the principal forces that drove the increasing complexity during the evolution from microbes to microbiologists. Today, evolutionists assume duplication followed by neo-functionalization is the major source of new genes. Since life is claimed to have started simple and evolved new functions, we examined mathematically the expected fate of duplicate genes. For prokaryotes, we conclude that carrying an expressed duplicate gene of no immediate value will be on average measurably deleterious, preventing such strains from retaining a duplicate long enough to accumulate a large number of mutations. This genome streamlining effect denies evolutionary theory the multitude of necessary new genes needed. The mathematical model to simulate this process is described here.
Andreas Wagner, “Energy Constraints on the Evolution of Gene Expression,” Molecular Biology and Evolution, 2005 22(6):1365-1374; doi:10.1093/molbev/msi126
I here estimate the energy cost of changes in gene expression for several thousand genes in the yeast Saccharomyces cerevisiae. A doubling of gene expression, as it occurs in a gene duplication event, is significantly selected against for all genes for which expression data is available. It carries a median selective disadvantage of s > 10^-5, several times greater than the selection coefficient s = 1.47 x 10^-7 below which genetic drift dominates a mutant's fate. When considered separately, increases in messenger RNA expression or protein expression by more than a factor 2 also have significant energy costs for most genes. This means that the evolution of transcription and translation rates is not an evolutionarily neutral process. They are under active selection opposing them. My estimates are based on genome-scale information of gene expression in the yeast S. cerevisiae as well as information on the energy cost of biosynthesizing amino acids and nucleotides.
Royal Truman's 2006 article "Searching for Needles in a Haystack"
The variability of amino acids in polypeptide chains able to perform diverse cellular functions has been shown in many cases to be surprisingly limited. Some experimental results from the literature are reviewed here. Systematic studies involving chorismate mutase, TEM-1 β-lactamase, the lambda repressor, cytochrome c and ubiquitin have been performed in an attempt to quantify the amount of sequence variability permitted. Analysis of these sequence clusters has permitted various authors to calculate what proportion of polypeptide chains of suitable length would include a protein able to provide the function under consideration. Until a biologically minimally functional new protein is coded for by a gene, natural selection cannot begin an evolutionary process of fine-tuning. Natural selection cannot favour sequences with a long term goal in mind, without immediate benefit. An important issue is just how difficult statistically it would be for mutations to provide such initial starting points. The studies and calculations reviewed here assume an origin de novo mainly because no suitable genes of similar sequence seem available for these to have evolved from. If these statistical estimates are accepted, then one can reject evolutionary scenarios which require new proteins to arise from among random gene sequences.
Patrick
May 29, 2008 at 07:09 AM PDT
Bob O'H: In the meantime, while we wait for F2XL, I would like to comment on the mathematical aspect which has been controversial between you and him, because I have a feeling that both of you are wrong. Maybe I am wrong too, but I would like to check. As I understand it, F2XL has put the question in these terms: 1) E. coli has a genome of 4.7 million base pairs. 2) We have to account for 490 specific mutations (I will not discuss this number, and will go on from here).

Now, I think the way to reason is: the probability of having a specific nucleotide substitution, if we have a single mutational event, is: a) 1 : (3 * 4.7 million), that is 1 : 1.41*10^7. Let's say 1 : 10^7 to simplify computations. Why? Because each mutational event can change each single point in three different ways (for instance, if you have A at one point, and it mutates, it can become T, C or G), and the single mutational event can happen at any of the nucleotide sites in the genome. So, I think F2XL is wrong here, because he takes 4 instead of 3, and makes a power of the length of the genome, instead of just multiplying by it. It is correct to have the length of the genome as a power only if you are computing all possible combinations of nucleotides with that length, which is not the case here.

So, let's go on. The next question is: what is the probability of having a specific combination of 490 mutations, if we have 490 single mutational events? Notice that here the order in which the events happen has no importance; we can just as well consider them simultaneous, or happening in any order. Instead, the order of the final mutations in the genome is fixed: we are looking for 490 definite mutations at those specific 490 sites. Are we OK with that? Well, I think the problem is similar to this one: I have three coins on a table, in a specific order (the nucleotide sites I want to change), and I flip each coin once (in any possible chronological order; that doesn't matter, provided I keep the order of the coins on the table). What is the probability of having, in the end, a specific sequence (say, three heads)? The combinations of sequences are 2^3, that is 8, and the probability of a specific combination is 1/8, that is 0.125. That can be obtained by multiplying the probabilities of each single event (0.5*0.5*0.5, that is 0.5^3, that is 0.125). The same is valid for our specific combination of 490 mutations with 490 mutational events, in no specific chronological order, but in a very specific order in the genome. As the probability of each event is, see point a), about 7*10^-8, the probability of our specific 490-nucleotide mutation, after 490 random mutational events, should be: b) 1 : (10^7)^490, that is of the order of 1 : 10^3430.

That's my final probability for the specific 490-nucleotide mutation after 490 mutational events. It's a really low probability, far beyond any conceivable UPB, but it is not the same as computed by F2XL, or, as far as I understand, by you. Maybe I am wrong. Am I? However it is, I think we have to arrive at a correct computation...gpuccio
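The three-coin analogy in the comment above is easy to verify numerically; the following sketch (an editorial illustration, not gpuccio's code) checks the 1/8 figure by simulation and shows the closed form it generalises:

```python
import random

# The analogy in the comment: three coins in fixed positions, each flipped
# once; what fraction of trials ends with the specific outcome "all heads"?
trials = 100_000
hits = 0
for _ in range(trials):
    flips = [random.choice(("H", "T")) for _ in range(3)]
    if flips == ["H", "H", "H"]:
        hits += 1

print(hits / trials)     # hovers around 0.125
print(0.5 ** 3)          # the closed form: 1/8 = 0.125

# The comment's genome case is the same closed form with a much smaller
# per-event probability and 490 events: p_single ** 490.
```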
May 29, 2008 at 07:05 AM PDT
Bob O'H: Sorry for sidetracking you. I thought it was you who threw into the discussion the paper about traversing landscapes... Anyway, I am waiting for F2XL too.gpuccio
May 29, 2008 at 05:37 AM PDT
Folks, you're now throwing examples at me that have nothing to do with the problem we were discussing. I don't want to get side-tracked from F2XL's problem, so I won't respond here. I'm sure another post will appear at some point to discuss these matters further. Sorry for this, but I'd like to find out if F2XL's calculation of CSI is valid. If we wander off onto other topics, he might decide we're not interested any more, and not reply.Bob O'H
May 29, 2008 at 04:49 AM PDT
Bob O'H: I have found, I think, the abstract of the first paper (but I remember I read the full paper, so I will go on looking for it). It should be the following: Evolution of Hormone-Receptor Complexity by Molecular Exploitation, Jamie T. Bridgham, Sean M. Carroll, Joseph W. Thornton. Abstract: According to Darwinian theory, complexity evolves by a stepwise process of elaboration and optimization under natural selection. Biological systems composed of tightly integrated parts seem to challenge this view, because it is not obvious how any element's function can be selected for unless the partners with which it interacts are already present. Here we demonstrate how an integrated molecular system—the specific functional interaction between the steroid hormone aldosterone and its partner the mineralocorticoid receptor—evolved by a stepwise Darwinian process. Using ancestral gene resurrection, we show that, long before the hormone evolved, the receptor's affinity for aldosterone was present as a structural by-product of its partnership with chemically similar, more ancient ligands. Introducing two amino acid changes into the ancestral sequence recapitulates the evolution of present-day receptor specificity. Our results indicate that tight interactions can evolve by molecular exploitation—recruitment of an older molecule, previously constrained for a different role, into a new functional complex. Just to start the discussion, and without entering into detail about the procedure of "using ancestral gene resurrection" and its possible biases, I just ask you: do you really think that artificial lab work which modifies just two amino acids, simply altering the affinity of a receptor for very similar ligands, is evidence of anything? What is it showing? That very similar interactions can be slightly modified in what is essentially the same molecule by small modifications? Who has ever denied that? I am sorry, but I must say that Behe is perfectly right here. When we ask for a path, we are asking for a path, not a single (or double) jump from here to almost here. I will be more clear: we need a model for at least two scenarios: 1) A de novo protein gene. See for that my detailed discussion in the relevant thread. De novo protein genes, which bear no recognizable homology to other proteins, are being increasingly recognized. They are an empirical fact, and they must be explained by some model. The length of these genes is conspicuous (130 amino acids in the example discussed on the thread). The search space is huge. Where is the traversing apparatus? What form could it take? 2) The transition from a protein with one function to another protein with a different function, where the functions are distinctly different, and the proteins are too. Let's say that they present some homology, say 30%, which lets Darwinists boast that one is the ancestor of the other. That's more or less the scenario for some proteins in the flagellum, isn't it? Well, we still have a 70% difference to explain. That's quite a landscape to traverse, and the same questions as at point 1) apply. You cannot explain away these problems with examples of one or two mutations yielding very similar proteins, indeed the same protein with a slightly different recognition code. It is obvious that even a single amino acid can deeply affect recognition. You must explain different protein folding, different function (not just the same function on slightly different ligands), different protein assembly. That's the kind of problem ID has always pointed out.
Behe is not just "shifting the goalposts". The goalposts have never been there. One- or two-amino-acid jumps inside the same island of functionality have never been denied by anyone, either logically or empirically. They are exactly the basic steps which you should use to build your model pathway: they are not the pathway itself. Let's remember that Behe, in TEOE, places the empirical "edge" at exactly two coordinated amino acid mutations, according to his reasoning about malaria parasite mutations. You can agree or not, but that is exactly his view. He is not shifting anything.gpuccio
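As a rough, editorial back-of-the-envelope for the "huge search space" of the 130-amino-acid de novo gene mentioned in the comment above (an illustration only, ignoring all biological constraints):

```python
import math

# Raw sequence-space size for a 130-amino-acid protein, taking the
# standard 20-letter amino acid alphabet. This is an editorial
# back-of-the-envelope, not a figure claimed in the comment itself.
length = 130
alphabet = 20
log10_space = length * math.log10(alphabet)
print(f"20^{length} ~ 10^{log10_space:.0f}")   # ~10^169 sequences
```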
May 29, 2008 at 01:42 AM PDT
Bob O'H: Obviously, I agree with kairosfocus' comments, summed up in the following: "Yes, an already existing protein may bounce around on its hill of functionality, maybe even move across to a close enough neighbouring peak. But that has nothing to do with: [1] ab initio, getting to cell-based life with its nanotechnologies, from monomers in prebiotic soups [cf the now available discussion on prebiotic soups in TMLO. If you need it, Foxit will download the file.] [2] the integrated cluster of shifts in cells, tissues, organs and systems to get to novel body plans." In more detail, I remember reading with attention the paper about corticoid receptors, and being really disappointed with it, while I don't remember reading the second one you mention. While I agree in general with Behe's comments, I will probably give you my specific view, if I can find and access the original papers. Discussing real examples is exactly what can bring our discussion to better results.gpuccio
May 29, 2008 at 12:57 AM PDT
Bob: Here is the relevant problem you need to traverse to get to a place where you can confidently say that evolutionary intermediates are not an issue:
The Cambrian explosion represents a remarkable jump in the specified complexity or "complex specified information" (CSI) of the biological world. For over three billions years, the biological realm included little more than bacteria and algae (Brocks et al. 1999). Then, beginning about 570-565 million years ago (mya), the first complex multicellular organisms appeared in the rock strata, including sponges, cnidarians, and the peculiar Ediacaran biota (Grotzinger et al. 1995). Forty million years later, the Cambrian explosion occurred (Bowring et al. 1993) . . . One way to estimate the amount of new CSI that appeared with the Cambrian animals is to count the number of new cell types that emerged with them (Valentine 1995:91-93) . . . the more complex animals that appeared in the Cambrian (e.g., arthropods) would have required fifty or more cell types . . . New cell types require many new and specialized proteins. New proteins, in turn, require new genetic information. Thus an increase in the number of cell types implies (at a minimum) a considerable increase in the amount of specified genetic information. Molecular biologists have recently estimated that a minimally complex single-celled organism would require between 318 and 562 kilobase pairs of DNA to produce the proteins necessary to maintain life (Koonin 2000). More complex single cells might require upward of a million base pairs. Yet to build the proteins necessary to sustain a complex arthropod such as a trilobite would require orders of magnitude more coding instructions. The genome size of a modern arthropod, the fruitfly Drosophila melanogaster, is approximately 180 million base pairs (Gerhart & Kirschner 1997:121, Adams et al. 2000). Transitions from a single cell to colonies of cells to complex animals represent significant (and, in principle, measurable) increases in CSI . . . . In order to explain the origin of the Cambrian animals, one must account not only for new proteins and cell types, but also for the origin of new body plans . . . Mutations in genes that are expressed late in the development of an organism will not affect the body plan. Mutations expressed early in development, however, could conceivably produce significant morphological change (Arthur 1997:21) . . . [but] processes of development are tightly integrated spatially and temporally such that changes early in development will require a host of other coordinated changes in separate but functionally interrelated developmental processes downstream. For this reason, mutations will be much more likely to be deadly if they disrupt a functionally deeply-embedded structure such as a spinal column than if they affect more isolated anatomical features such as fingers (Kauffman 1995:200) . . . McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes--the very stuff of macroevolution--apparently do not vary. In other words, mutations of the kind that macroevolution doesn't need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don't occur.6
Yes, an already existing protein may bounce around on its hill of functionality, maybe even move across to a close enough neighbouring peak. But that has nothing to do with: [1] ab initio, getting to cell-based life with its nanotechnologies, from monomers in prebiotic soups [cf the now available discussion on prebiotic soups in TMLO. If you need it, Foxit will download the file.] [2] the integrated cluster of shifts in cells, tissues, organs and systems to get to novel body plans. And in that context, the flagellum is a useful toy example, one that F2XL has already long since shown runs into serious probabilistic resource constraints, never mind the various tangents that may distract us from the central point. [Remember, per the fall of France in 1940, such distraction has been a core component of, say, Blitzkrieg -- it may win a rhetorical battle but it does not adequately address the fundamentals of the issue.] Bob, you need to show us that there is a credible route from the assumed tail-less E. coli to the tailed one. We know that intelligences can traverse such search spaces, but we have no good reason to see that the abstract possibility that chance can do so will have any material effect in the real world, where we have to address the availability of search resources. Remember, we are talking about dozens of proteins, and a self-assembly system that has to have sufficiently functional intermediates that natural selection and the like can reinforce them into niches, whence they move on to the next level. (And the TTSS seems to be more of a subset derivative than a precursor, i.e. the code embeds the subset functionality.) GEM of TKI PS: Mavis, cybernetic systems are based on our insight into the mechanical necessities of the world. We set up entities that then reliably do as instructed [even generate pseudo or credibly actual random numbers -- no reason a Zener noise source cannot be put into a PC, for instance]. But the configuration of components to make up a system is anything but a random walk. Then, we program them [assuming the system is programmable separately from making up the hardware config]. We patch analogue computers and adjust pots, putting in diode function generators etc.; we tune control systems [maybe set up adaptive ones . . .]; we write and load software. Some of that stuff generates fractals. We can compare fern growth on screen with real ferns, noting self-similarity and scaling patterns etc. (well do I remember doing and playing with such coding). But in so doing, we must ask: how do fern leaves grow? Ans: according to an inner program, i.e. we are back at the program.kairosfocus
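On the fern-on-screen point in the postscript above: the classic Barnsley fern is one standard way such images are generated, combining fixed affine maps ("law") with a random choice among them ("chance"). The sketch below is illustrative only, not the commenter's original code.

```python
import random

# The Barnsley fern: an iterated function system in which four fixed
# affine maps (the "law" part) are applied in a randomly chosen order
# (the "chance" part), producing a self-similar, fern-like cloud of points.
MAPS = [
    # (a, b, c, d, e, f, probability) for x' = a*x + b*y + e, y' = c*x + d*y + f
    (0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
    (0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
]

def barnsley_fern(n_points=50_000):
    x, y = 0.0, 0.0
    points = []
    for _ in range(n_points):
        a, b, c, d, e, f, _p = random.choices(MAPS, weights=[m[6] for m in MAPS])[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points

pts = barnsley_fern()
print(len(pts), "points; plot them (e.g. with matplotlib) to see the fern")
```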
May 29, 2008 at 12:47 AM PDT
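The fern-growth remark above can be made concrete with a minimal sketch of an iterated function system: a handful of fixed affine maps, applied at random, generate a self-similar fern-like figure. The coefficients below are the standard Barnsley-fern values; the point count, grid size and ASCII rendering are arbitrary illustrative choices, not anything from the comment itself.

# Minimal sketch: the classic Barnsley-fern iterated function system (IFS).
# A few fixed affine maps, chosen at random on each step, trace out a
# self-similar fern-like shape; the "program" is short even though the
# output looks intricate.
import random

def barnsley_fern(n_points=50000):
    x, y = 0.0, 0.0
    points = []
    for _ in range(n_points):
        r = random.random()
        if r < 0.01:                                   # stem
            x, y = 0.0, 0.16 * y
        elif r < 0.86:                                 # successively smaller leaflets
            x, y = 0.85 * x + 0.04 * y, -0.04 * x + 0.85 * y + 1.6
        elif r < 0.93:                                 # largest left leaflet
            x, y = 0.20 * x - 0.26 * y, 0.23 * x + 0.22 * y + 1.6
        else:                                          # largest right leaflet
            x, y = -0.15 * x + 0.28 * y, 0.26 * x + 0.24 * y + 0.44
        points.append((x, y))
    return points

def ascii_render(points, cols=60, rows=30):
    grid = [[" "] * cols for _ in range(rows)]
    for x, y in points:
        c = int((x + 3.0) / 6.0 * (cols - 1))          # x roughly in [-3, 3]
        r = int((1.0 - y / 10.0) * (rows - 1))         # y roughly in [0, 10]
        if 0 <= r < rows and 0 <= c < cols:
            grid[r][c] = "*"
    return "\n".join("".join(row) for row in grid)

print(ascii_render(barnsley_fern()))

The whole generator is a few dozen lines, which is the sense in which such output is attributed to a short "inner program" rather than to the plotted points themselves.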
gpuccio - the paper is a review of several pieces of work on molecular evolution, each showing that there is a landscape that can be traversed. They even mention two examples (hormone detection by steroid receptors and repressor–operator binding in the E. coli lac system) where a "lock and key" mechanism can evolve. Behe's reaction is typical (hmm, somehow I think we might find ourselves in disagreement here). Basically, this review describes evidence that something that appears to be irreducibly complex can have functional intermediates, and that fitness can increase along paths in sequence space. So, Behe just shifts the goalposts: he says this is minor, and that bigger shifts would be impossible. Well, go ahead mate and collect the evidence! I see F2XL hasn't appeared here. I'll wait for his response to my last comments. Perhaps he has a real life as well. :-)Bob O'H
May 28, 2008 at 10:40 PM PDT
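For readers who want to see what "fitness can increase along paths in sequence space" means in the abstract, here is a toy sketch of an adaptive walk. The match-the-target fitness function, the sequence length and every other parameter are illustrative assumptions of mine, not anything from the review Bob O'H cites, and the toy says nothing about whether real molecular landscapes are this well connected, which is exactly the point under dispute.

# Toy adaptive walk on a sequence space: accept a single random mutation
# only if it strictly increases fitness, and print each functional
# intermediate along the way.
import random

ALPHABET = "ACGT"
TARGET = "GATTACAGATTACA"        # arbitrary illustrative target
rng = random.Random(0)

def fitness(seq):
    """Number of positions matching the target (illustrative fitness only)."""
    return sum(a == b for a, b in zip(seq, TARGET))

def adaptive_walk(start):
    seq, step = start, 0
    print(f"step {step:2d}: {seq}  fitness = {fitness(seq)}")
    while fitness(seq) < len(TARGET):
        pos = rng.randrange(len(seq))
        mutant = seq[:pos] + rng.choice(ALPHABET) + seq[pos + 1:]
        if fitness(mutant) > fitness(seq):            # keep only improving steps
            seq, step = mutant, step + 1
            print(f"step {step:2d}: {seq}  fitness = {fitness(seq)}")
    return seq

adaptive_walk("".join(rng.choice(ALPHABET) for _ in range(len(TARGET))))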
Mavis: I was going to answer, but Patrick has anticipated me. I can see no contradiction between what Patrick has said and what both kairosfocus and I have said. The concept is simple. A fractal output, in itself, is not CSI, and is not classified as necessarily designed by the EF (let's remember, however, that it could be designed just the same. The EF can well have false negatives; indeed, all designed things which do not have enough complexity will escape the EF). In the same way, the fractal procedure for computing the fractal output, if simple enough, is not CSI. But if that procedure is part of a longer code which uses it in a complex context, then the whole code would exhibit CSI, although an isolated part of it may not exhibit it. In the same way, in a computer program a single instruction may not be complex enough to exhibit CSI, but a functional sequence of 100 instructions is CSI. I hope that answers your question.gpuccio
May 28, 2008 at 05:03 PM PDT
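The "100 instructions" point above can be put in rough numbers. The sketch below is only illustrative: the 64-symbol instruction alphabet is an assumption of mine, not something gpuccio specified, and the 500-bit threshold is Dembski's oft-cited universal probability bound (about 10^-150).

# Toy bit-count behind the claim that a short instruction is not CSI but a
# long functional sequence of instructions may be.  Alphabet size and the
# 500-bit threshold are illustrative assumptions.
import math

ALPHABET_SIZE = 64          # assumed size of the instruction alphabet
THRESHOLD_BITS = 500        # Dembski's universal probability bound, ~10^-150

def information_bits(length, alphabet_size=ALPHABET_SIZE):
    """Bits needed to single out one particular sequence of this length."""
    return length * math.log2(alphabet_size)

for length in (1, 10, 100):
    bits = information_bits(length)
    verdict = "exceeds" if bits > THRESHOLD_BITS else "falls below"
    print(f"{length:3d} instructions ~ {bits:4.0f} bits ({verdict} the 500-bit threshold)")

Under these assumptions one instruction carries about 6 bits and a hundred carry about 600, which is the arithmetic gpuccio's contrast relies on.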
Even if you could not decide between two very similar textures, one generated manually and one procedurally? How can specification matter at that point?
Designers can use multiple methods/tools to reach an intended result. How the actor acted is a separate question. https://uncommondescent.com/intelligent-design/how-does-the-actor-act/
Does it?
Rhetorical question... They're intended to "encourage the listener to reflect on what the implied answer to the question must be." Ponder comment #152 especially.
Who’s right?
We are all correct. kf: "It is the programs and formulae that generate them that pass the EF." gpuccio: "Obviously, the system which computes the fractal is a completely different thing…" me (149): "the systems generating the complexity [fractals in this case] are taken into account". And gpuccio again:
In theory, there could be fractal parts in the non-coding DNA, but I am not aware of evidence of that.
Check out fractogene.com to contact those who are looking for such evidence. http://www.junkdna.com/fractogene/05_simons_pellionisz.pdfPatrick
May 28, 2008 at 04:51 PM PDT
An intended result can be reached via algorithm when intelligence is involved.
The question asked, by you, was
Does the usage of a procedural texture mean that a rendering incorporating such a feature is not designed?
A rendering is an artificial construction and by its very nature is designed. My point is that the specific detail generated by procedural textures is "designed" in the same way that the differences between blades of grass are "designed": not predictable except in the general case. Earlier you said
It would certainly affect the calculating of informational bits–the systems generating the complexity are taken into account, along with the reduced amount of information necessary to represent the entire object–but I hope that makes it obvious how silly your objection is (aka the presence of fractals does not equate to the EF always returning a false).
You appear to be saying that the source of the texture makes a difference, even if the two were very similar in appearance.
The difference is the lack of Specification.
Even if you could not decide between two very similar textures, one generated manually and one procedurally? How can specification matter at that point?
Essentially what you’re doing is rephrasing the old tired objection that Dawkins made about “apparent design”.
In fact I was attempting to get an answer to the original point you asked
Does the usage of a procedural texture mean that a rendering incorporating such a feature is not designed?
Does it?
A designed object can contain pseudo-random attributes.
I'm not saying it can't. Previously kairosfocus said
PS: Fractals do NOT pass the EF — they are caught as "law" — the first test. It is the programs and formulae that generate them that pass the EF. [And, these are known independently to be agent-originated, so they support the EF's reliability.]
And gpuccio said
A fractal is a good example of a product of necessity. So, it does not exhibit CSI, because the EF has to rule out those forms of self-organization produced by necessary law. Obviously, the system which computes the fractal is a completely different thing… Moreover, I don't think that a fractal in itself has function, so it would not be functionally specified.
And you, Patrick said
but I hope that makes it obvious how silly your objection is (aka the presence of fractals does not equate to the EF always returning a false).
Who's right?Mavis Riley
May 28, 2008 at 04:02 PM PDT
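The procedural-texture example under discussion can be made concrete with a minimal sketch: value noise built from a coarse lattice of pseudo-random values, bilinearly interpolated and printed as ASCII shades. The lattice size, seed and shading characters are arbitrary illustrative choices; the point is only that the designer specifies the procedure and the seed, while the per-pixel detail is generated rather than placed by hand.

# Minimal sketch of a procedural texture: value noise on a coarse random
# lattice, bilinearly interpolated up to a finer grid of "pixels".
import random

def value_noise(width=64, height=32, cell=8, seed=42):
    rng = random.Random(seed)
    lw, lh = width // cell + 2, height // cell + 2
    lattice = [[rng.random() for _ in range(lw)] for _ in range(lh)]
    img = []
    for y in range(height):
        row = []
        for x in range(width):
            gx, gy = x / cell, y / cell
            x0, y0 = int(gx), int(gy)
            fx, fy = gx - x0, gy - y0
            # Bilinear interpolation between the four surrounding lattice values.
            top = lattice[y0][x0] * (1 - fx) + lattice[y0][x0 + 1] * fx
            bot = lattice[y0 + 1][x0] * (1 - fx) + lattice[y0 + 1][x0 + 1] * fx
            row.append(top * (1 - fy) + bot * fy)
        img.append(row)
    return img

SHADES = " .:-=+*#%@"
for row in value_noise():
    print("".join(SHADES[int(v * (len(SHADES) - 1))] for v in row))

Changing the seed changes every pixel while leaving the procedure, and the designer's intent for it, untouched, which is the distinction the exchange above keeps circling around.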
Bob O'H: Any comment on Behe's comments? I do hope your mysterious "example" is not the one about hormonal receptors, which I read some time ago. That would really be a disappointment...gpuccio
May 28, 2008 at 03:51 PM PDT
Mavis: I feel a little sidetracked by your comments about fractals. Let's review things, as I see them:
1) Fractals, in themselves, are not CSI. Therefore, they cannot be recognized by the EF as designed (though they certainly can be designed; I hope the difference is clear). The mechanism which computes the fractal could exhibit CSI, but that has to be evaluated in each single case.
2) Fractal forms occur in nature, both in non-living and in living things. I admit that your vegetable seems remarkable...
3) The information in DNA is not fractal (at least, not the protein coding sequences). In theory, there could be fractal parts in the non-coding DNA, but I am not aware of evidence of that.
4) CSI is not fractal. The information which specifies a functional protein is not fractal. It has to be found either from knowledge of the physical properties of protein sequences (such as folding), and we are not yet able to do that, or by guided random search coupled with specific measurement of the searched function (as in protein engineering). No fractal formula will tell us which amino acid sequence will give a protein which folds in a certain way and which has a specific enzymatic activity, any more than a fractal formula can give us the text of Hamlet.
5) We have practically no idea of what codes for the macroscopic form of multicellular beings. If we find some fractal aspect in macroscopic (or microscopic) parts, such as your vegetable, or, say, the patterns of arborization of vessels, or anything else, we cannot say that that particular aspect of form exhibits CSI (which does not imply that it is not designed). If we knew the mechanism which generates the fractal (which we don't), we could try to compute whether it exhibits CSI or not. However, most macroscopic forms of living beings do not appear to be fractal.
6) Even in computer programming, fractals can be used in specific procedures, like compression, but have you ever seen functioning program code generated by fractal formulas? Procedures are not fractal. Program code is not fractal. The same can be said of biological information. Most information present in living beings is not fractal, does exhibit CSI, and therefore requires a designer.gpuccio
May 28, 2008 at 03:48 PM PDT
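Point 4 above can be given rough numbers. The sketch below computes the raw size of protein sequence space for a few lengths; the lengths are arbitrary illustrative values, and the calculation counts only sequences over the 20 standard amino acids while ignoring synonymous or partially functional variants, so it describes the size of the space rather than the probability of anything in particular.

# Rough numbers for the raw protein sequence space: 20**L sequences for an
# L-residue protein, i.e. L * log2(20) bits to single out one sequence.
import math

def sequence_space_bits(length, alphabet=20):
    """Bits required to specify one sequence of this length."""
    return length * math.log2(alphabet)

for L in (50, 150, 300):
    bits = sequence_space_bits(L)
    digits = L * math.log10(20)
    print(f"L = {L:3d}: about 10^{digits:.0f} possible sequences, "
          f"i.e. {bits:.0f} bits to specify one")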