Uncommon Descent Serving The Intelligent Design Community

Evolution driven by laws? Not random mutations?


So claims a recent book, Arrival of the Fittest, by Andreas Wagner, professor of evolutionary biology at the University of Zurich in Switzerland (also associated with the Santa Fe Institute). He lectures worldwide and is a fellow of the American Association for the Advancement of Science.

From the book announcement:

Can random mutations over a mere 3.8 billion years solely be responsible for wings, eyeballs, knees, camouflage, lactose digestion, photosynthesis, and the rest of nature’s creative marvels? And if the answer is no, what is the mechanism that explains evolution’s speed and efficiency?

In Arrival of the Fittest, renowned evolutionary biologist Andreas Wagner draws on over fifteen years of research to present the missing piece in Darwin’s theory. Using experimental and computational technologies that were heretofore unimagined, he has found that adaptations are not just driven by chance, but by a set of laws that allow nature to discover new molecules and mechanisms in a fraction of the time that random variation would take.

From a review (which is careful to note that it is not a religious argument):

The question “how does nature innovate?” often elicits a succinct but unsatisfying response – random mutations. Andreas Wagner first illustrates why random mutations alone cannot be the cause of innovations – the search space for innovations, be it at the level of genes, proteins, or metabolic reactions, is so large that the probability of stumbling upon all the innovations needed to make a little fly (let alone humans) is too low to have occurred within the time span the universe has been around.

He then shows some of the fundamental hidden principles that can actually make innovations possible for natural selection to then select and preserve those innovations.

Like interacting parallel worlds, this would be momentous news if it is true. But someone is going to have to read the book and assess the strength of the laws advanced.

One thing is for sure: if an establishment figure can safely write this kind of thing, Darwin’s theory is coming under more serious fire than ever. But we already knew that, of course, when Nature published an article on the growing dissent within the ranks about Darwinism.

In origin of life research, there has long been a law vs. chance controversy. For example, Does nature just “naturally” produce life? vs. Maybe if we throw enough models at the origin of life… some of them will stick?

Note: You may have to apprise your old schoolmarm that Darwin’s theory* is “natural selection acting on random mutations,” not “evolution” in general. It is the only theory that claims sheer randomness can lead to creativity, in conflict with information theory. See also: Being as Communion.

*(or neo-Darwinism, or whatever you call what the Darwin-in-the-schools lobby is promoting or Evolution Sunday is celebrating).

Follow UD News at Twitter!

Comments
gpuccio, Thanks for your substantive responses. I haven't been able to review them in detail yet, as work has taken up too much time this week. Hopefully I'll be able to get back into this conversation tomorrow or the day after; I'll probably respond on your new thread, which seems on a quick review like a continuation of this one. Best, LH Learned Hand
MT: The same for you. Note, the issue is not that proteins once created can fold and function [and may need to be chaperoned to do the right thing cf prions and suspected causes of Alzheimer's etc . . . ], but that getting AA sequences that will then fold and function and coding for them in D/RNA is a super search challenge in AA sequence space, much less the wider space of organic chemistry. Recall, codes, NC assembly machines and associated support systems all need to be explained, starting with Darwin's pond or the like, if Blind Watchmaker thesis thought is to rise above being in effect ideological just so stories dressed up in a lab coat and solemnly pronounced as incantations in suitably impressive tones. KF kairosfocus
D, welcome. You may find what I just put up here helpful, though I confess it is loaded with allusions across several disciplines of thought and years of exchanges in and around UD. KF kairosfocus
#691 kairosfocus
We don’t actually need to quantify to recognise, but we can quantify, and the result is that the quantification helps us see how hard it is for the atomic and temporal resources of the observed cosmos to get beyond a sparse search of the very large config spaces implied by the possible arrangements of parts vs the tight configurational constraints implied by needs of interactive, specific functional organisation. KF
Thank you for the detailed explanation. Dionisio
MT, kindly read 695 above, you are misdirecting your focus and inadvertently are begging the crucial questions. G/night. KF kairosfocus
Video: http://channel.nationalgeographic.com/channel/videos/crop-circle-creation/ kairosfocus
KF @ 689
MT: In reality, we are dealing with AA chains where each position in A1-A2 . . . An can take 20 values, yielding for n, W = 20^N possibilities. With a shortish typical length of 250 AA’s, that’s 1.8 * 10^325 possibilities, whilst the sample window of the observed cosmos is like 10^111
The probabilities are huge, but protein folding depends on diverse folding mechanisms and is not just probabilistic. See mechanisms HERE. KF @ 690
PS: BTW, that’s 250 degrees of freedom, or 250 dimensions
Then the volume of search space to be searched is even less Me_Think
MT, The design inference on FSCO/I requires joint application to a given aspect of a phenomenon, of functional specificity and sufficient complexity; regularities will be assigned to mechanical necessity, stochastically distributed and reasonably plausible contingencies to chance, and variations to both. As has been pointed out over and over, a general decoding algorithm is not a reasonable expectation, on theory of computation. Show your functional specificity and complexity beyond a reasonable threshold, and you are going to find good reason to infer design. And we can see examples on the matter here, where I am strongly inclined to suspect that natural crop disturbances will not show precise geometric figures with clean cut edges following precise mathematical loci for arcs of circles, straight lines and other similar curves [e.g. ellipses, spirals etc], or patterns consistent with the sort of artistic representations that became popular post Cubism etc, and which have become a part of the visual language of commercial logos and symbols. Nor will they follow the sorts of patterns that archaeologists have long called crop marks that point to archaeology underneath as opposed to natural. KF kairosfocus
MT: The problem is not the fraction of the space that are near neighbours, it is that if one starts in an arbitrary location, one embarking on a random walk or a dynamic stochastic walk is likely indeed to be remote from any island of function and so is nowhere able to get a functional signal to reinforce success. Actually, in a sense, that ever falling fraction of the possibilities space accessible in a random direction step of Hamming distance radius r, is exactly the problem, it means that if you are in the deep bulk of non-functional possibilities and are forced to be in a sparse search by steps of a very large space (due to the constraint of atomic and temporal resources in the sol system or the observed cosmos) you are vastly unlikely ever to find an island of function by such processes. (Try, Darwin's pond of salts or the like as a start, the OOL root of the tree of life that so many evo mat thinkers are so reluctant to start from.) The old story on hill climbing based on increments up a slope imply being present on an island of function. This is the big problem with all those evolutionary computing models that are so often trotted out, they are misdirected to the wrong problem and beg the real issue at stake.. Recall, the problem is that functional complex interactive organisation confines one to a narrow range of possible configs in the wider space, the island of function effect. Just think, a bag of parts for the ABU 6500 3c shaken up, vs assembled in accord with its exploded view wiring diagram. To get the next level of it, imagine the bag now has parts for similar but incompatible reels added in and is shaken some more --- even arranging for the right part to be in the right place at the right time can be a big, information rich problem. Predictably, the low information approach will fail, the high information one is likely to work but is a design approach. Precisely, because of the difference between blind contingency driven by chance and necessity, and intelligently directed choice contingency . . . design. Which BTW is one reason why we routinely intuitively recognise designs as configs that are readily seen as only likely on design. KF kairosfocus
In short there is a design filter explanatory process applied and it differentiates those that seem to come about by chance and necessity from those that show signs of being designed, the overwhelming majority. Just what is that filtering process, the Wikipedia are rather reluctant to admit.
It is easy to distinguish a natural crop circle from a man-made one. Natural crop circles have bent stalks (unlike man-made ones, which show signs of breakage), natural crop circles show a starburst pattern in the stalk, and the area around a natural crop circle shows higher EM readings. Ball lightning phenomena are observed in natural crop circles, and stalk strength measurements show weakness. The point is that dFSCI would not be able to distinguish a natural crop circle from a similar man-made one. You need to make stalk-level observations. Me_Think
Joe, courtesy of wireframe analysis, we can use bit based info metrics to deal with the organised complexity of some things, and with the just plain complexity of others, but the problem is as noted in my recent post on FSCO/I, mesh counts easily run to 5 - 7 figures, and the work involved takes up far too much time for a blog type post. Besides, it is routinely done in technical drawing work or computer animation work. Part of how we are literally surrounded by cases in point of FSCO/I . . . even our clothing and the PCs we type on as well as glasses etc are cases in point -- and the routinely empirically known source of FSCO/I. Where, ever since Paley was strawmannised by ignoring his moving beyond the ordinary watch to a thought exercise of a self replicating one, there has been a serious point on how the additionality of self replication gives further reason to infer design, as that too is FSCO/I. I am beginning to make a cultural diagnosis of resistance to threatening change, based on denial driven by selective hyperskepticism. It's beginning to sound like a strategic change gap analysis with dominant groups vested in a status quo and the need to empower the marginalised, intimidated and apathetic to demand their right to sit at the table and to refuse to take dismissals or abuses as acceptable responses . . . indeed to see them as showing a dominant cluster of factions being willing to be abusive to sustain its power and apparent legitimacy. Where of course the Marxists were the classic case in point of abusive nihilistic rebels who when they gained power continued in their bad ways, with horrific consequences. KF kairosfocus
uh oh, sorry on italicisation kairosfocus
D, As I recall, crop circles -- and they started out as that, circles in fields -- came to significant attention in the early 80's, and speculations about where they came from and what they meant arose. There were speculations about electric fields, weather oddities, etc., but those pretty much vanished when lines and shapes like stars or elaborate evidently composed patterns like logos began to appear. At some point a group came up and confessed. They function as artistic drawings on a grand scale much like Nazca lines and various public monuments or murals, e.g. try horses in chalk rock outline on English countryside hills etc. Such things are amenable to nodes-arcs analysis and wireframe meshes -- cf my recent post here which has illustrative images tied to the ABU 6500 C3 mag reel [& DV I will do something on the morrow, though I have an urgent letter cropping up . . . ], and can be seen as manifesting functionally specific, complex organisation and associated information, FSCO/I. BTW, one of the astonishing things about the resistance to evidence that we can see above and elsewhere is that we live in a world of technology and are literally surrounded by examples of FSCO/I and its routinely known and generally recognised -- save where it seems to be very inconvenient to a certain dominant evolutionary materialist ideology that likes to dress up in a lab coat and call itself science -- origin in design as causal process. We don't actually need to quantify to recognise, but we can quantify, and the result is that the quantification helps us see how hard it is for the atomic and temporal resources of the observed cosmos to get beyond a sparse search of the very large config spaces implied by the possible arrangements of parts vs the tight configurational constraints implied by needs of interactive, specific functional organisation. KF PS: WIKI
A crop circle is a sizable pattern created by the flattening of a crop such as wheat, barley, rye, maize, or rapeseed. Crop circles are also referred to as crop formations because they are not always circular in shape. The documented cases have substantially increased from the 1970s to current times, and many self-styled experts alleged an alien origin. However, in 1991, two hoaxers, Bower and Chorley, claimed authorship of many circles throughout England, after one of their circles was certified as impossible to be made by a man by a notable circle investigator in front of journalists.[1] Circles in the United Kingdom are not spread randomly across the landscape, but they appear near roads, areas of medium to dense population, and cultural heritage monuments, such as Stonehenge or Avebury, and always in areas of easy access.[2] Archeological remains can cause cropmarks in the fields in the shapes of circles and squares, but they do not appear overnight, and they are always in the same places every year. The scientific consensus is that most or all crop circles are man-made, with a few possible exceptions due to meteorological or other natural phenomena. . . .
In short there is a design filter explanatory process applied and it differentiates those that seem to come about by chance and necessity from those that show signs of being designed, the overwhelming majority. Just what is that filtering process, the Wikipedia are rather reluctant to admit. No prizes for guessing where this all points.
kairosfocus
PS: BTW, that's 250 degrees of freedom, or 250 dimensions. kairosfocus
MT: In reality, we are dealing with AA chains where each position in A1-A2 . . . An can take 20 values, yielding for n, W = 20^N possibilities. With a shortish typical length of 250 AA's, that's 1.8 * 10^325 possibilities, whilst the sample window of the observed cosmos is like 10^111. With protein fold domains scarce. And the search is via variations of genes filtered through generations of differential reproduction. KF kairosfocus
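For readers who want to check the raw combinatorics quoted above, here is a minimal Python sketch; the 10^111 "sample window" figure is carried over from the comment as given, not derived here.

    # Minimal sketch: size of the 250-residue AA sequence space quoted above.
    from math import log10

    n_sites, options = 250, 20
    W = options ** n_sites                       # 20^250 possible sequences
    print(f"W = 20^250 ~ 10^{log10(W):.2f}")     # ~10^325.26, i.e. ~1.8 * 10^325

    sample_window_exp = 111                      # log10 of the quoted search resources
    print(f"searchable fraction ~ 10^{sample_window_exp - log10(W):.0f}")   # ~10^-214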
KF, Since our interlocutors don't want to answer my questions in posts 675 and 676 (apparently they don't want to answer most of my questions anywhere - maybe my questions are too simple for their elevated intellectual level?), can you please clarify this whole thing about the crop circles for me? I'm not familiar with this "crop circles" stuff, but wanted to know if they have any known functionality? Do the FSCO/I and dFSCI concepts apply to nonfunctional objects or systems? Thank you. Dionisio
KF @ 680
PS: The issue is to find a coding that generates a physically remote folding-functioning protein AA sequence with sparse search relative to scope of space
The search space fraction shrinks dramatically in higher dimensions. Imagine a solution circle (the circle within which the solution exists) of radius 10 cm inside a 100 cm square search space. The area which needs to be searched for the solution is pi x 10^2 = 314.16. The total search area is 100 x 100 = 10000. The % area to be searched is (314.16/10000) x 100 = 3.14%. In 3 dimensions, the search volume will be 4/3 x pi x 10^3, and the space to search is now a cube (because of 3 dimensions) = 100^3. Thus the % of the space to be searched falls to just 4188.79/100^3 = 0.42% only. However, proteins are in hyperdimensions, so the hypervolume of a sphere with dimension d and radius r will be (pi^(d/2) x r^d) / Γ(d/2 + 1), where Γ is the Gamma function, while the hypervolume of a cube with side s is s^d. At 10 dimensions, the volume to search reduces to just ~2.6 x 10^-8 %. It is not as difficult as envisioned. Me_Think
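A minimal Python sketch of the same volume-ratio calculation, using the standard n-ball volume formula and working in log space so that high dimensions do not overflow; the radius 10 and side 100 are taken from the comment above, and d = 250 is added only to echo the protein-length example used elsewhere in the thread.

    # Fraction of a d-dimensional cube (side s) occupied by a d-ball of radius r,
    # computed via V_ball = pi^(d/2) * r^d / Gamma(d/2 + 1), in log10 space.
    from math import pi, log, log10, lgamma

    def log10_ball_fraction(d, r=10.0, s=100.0):
        """log10 of V_ball(d, r) / V_cube(d, s)."""
        log10_ball = (d / 2) * log10(pi) + d * log10(r) - lgamma(d / 2 + 1) / log(10)
        return log10_ball - d * log10(s)

    for d in (2, 3, 10, 250):
        pct = log10_ball_fraction(d) + 2         # +2 converts the fraction to percent
        print(f"d = {d:3d}: ~10^{pct:.1f} percent of the space")
    # d = 2 -> ~3.1 %, d = 3 -> ~0.42 %, d = 10 -> ~2.6e-8 %, d = 250 -> ~1e-395 %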
Reality:
Will you please be kind enough to name a variety of things that are or are not “readily amenable into bits”
Things that are: DNA; RNA; polypeptides; computer programs; text.
Things that are not: rock formations; baseballs; paintings.
to name, describe, and demonstrate those “other design detection tools”
All those that are in use today. Or do you think archaeologists flip coins to determine design? Perhaps that is also how forensic scientists determine if a crime was committed? The fire was arson because the coin landed heads up. We use our knowledge of cause and effect relationships. We also look for signs of work and counterflow.
to substantiate and demonstrate the existence of and use of dFSCI in detecting design?
With respect to biology dFSCI is what Crick defined as biological information. It exists in all biological organisms. It cannot be accounted for via purely materialistic processes and it matches the positive criteria for intelligent design. That said if someone could step forward and demonstrate that biological information can arise without A) existing biological information and B) via physics/ chemistry, we won't be able to use the existence of biological information as an indication of intelligent design. ID is based on three premises and the inference that follows (DeWolf et al., "Darwinism, Design and Public Education", pg. 92):
1) High information content (or specified complexity) and irreducible complexity constitute strong indicators or hallmarks of (past) intelligent design.
2) Biological systems have a high information content (or specified complexity) and utilize subsystems that manifest irreducible complexity.
3) Naturalistic mechanisms or undirected causes do not suffice to explain the origin of information (specified complexity) or irreducible complexity.
4) Therefore, intelligent design constitutes the best explanation for the origin of information and irreducible complexity in biological systems.
The people who say that "Naturalistic mechanisms or undirected causes suffice to explain the origin of information (specified complexity) or irreducible complexity", have all of the power. They don't need to attack ID and by attacking ID they are admitting that point 3 is correct. OK so when dFSCI is applied to living organisms we get a positive detection of intelligent design. And guess what? There isn't even an alternative testable hypothesis. So we don't care if you attack our concepts as you definitely don't have anything to offer to compare to. Joe
Reality, with all due respect, empty assertions, in effect our side has some hyperskeptical dismissive talking points so there. Go, look at some FSCO/I rich entities and then come back and tell us it is not really there, why. And explain to us how say tossing in a bag and juggling parts of an ABu 6500 reel will get it to work, as opposed to the wiring diagram guided assembly. And so forth. KF kairosfocus
Joe said: "Also I wouldn’t use dFSCI to determine design of something that isn’t readily amenable into bits. There are other design detection tools that are better in some situations." Will you please be kind enough to name a variety of things that are or are not "readily amenable into bits" and to name, describe, and demonstrate those "other design detection tools" and to substantiate and demonstrate the existence of and use of dFSCI in detecting design? Please stick to things that are not already known to be designed. Reality
MT: I rather doubt it; the only way I am aware of to get electrical and magnetic fields into perfect five pointed star patterns etc would involve intelligent direction, and would involve equipment that if at ground level would severely disrupt the fields, if in the air the means of elevation would likely have the same effect.
A very interesting analysis of Crop Circles can be found here (Please see video): http://realitysandwich.com/18277/secrets_crop_circles/ Me_Think
kairosfocus, endlessly repeating your FSCO/I claims does nothing to substantiate them, and these claims of yours; "...the FSCO/I concept is not my creation nor that of design thinkers. It is a commonplace in Engineering though it may not be put in those words, and as you can see it traces to work by Orgel and Wicken...", are just more of your non-scientific, underhanded attempts to legitimize your FSCO/I claims. Your "islands of function" claims are also based on false premises, which has been pointed out in refutations of your claims that you are or should be fully aware of. Reality
Me Think:
All Natural crop circles are formed by Electromagnetic force and weak paddy stem – leading to folding of the stalk which forms circle design.
Evidence please. Joe
PS: The issue is to find a coding that generates a physically remote folding-functioning protein AA sequence with sparse search relative to scope of space, cf the recent Axe paper discussed here on the w/end. Okay. kairosfocus
MT: I rather doubt it; the only way I am aware of to get electrical and magnetic fields into perfect five pointed star patterns etc would involve intelligent direction, and would involve equipment that if at ground level would severely disrupt the fields, if in the air the means of elevation would likely have the same effect. And airships are notoriously bulky and hard to control in winds, to hover. Naturally occurring B and E fields do not take on geometries likely to do that which you ascribe. Can you show an experiment or natural-world observation of cause in process, that did that, and generated stars, successive circles and so forth in a compositional pattern etc? The issue here is not calculation of components and odds but empirical demonstration of ability to generate complex patterns fitting artistic compositional patterns in these cases. KF kairosfocus
Joe @ 672 and kairosfocus @ 673
I don’t see any natural crop circles there. Also I wouldn’t use dFSCI to determine design of something that isn’t readily amenable into bits.
When it comes to crop circles as a suggested case, find us one showing FSCO/I that reasonably came about by blind chance and mechanical necessity . . . ironically, this is a case where a design inference is routinely made on FSCO/I.
All Natural crop circles are formed by Electromagnetic force and weak paddy stem - leading to folding of the stalk which forms circle design. A Minimum inventory/Maximum diversity system (refer: Peter Pearce) ensures elaborate modular designs from just a few design components, so what looks complicated is quite easily achievable by natural forces. Similarly, IMHO, protein folding mechanisms are diverse and need not be by an ID agent. Unless dFSCI takes the alternate methods into consideration and calculates the probability of those methods to be zero, it can't infer that protein folding is guided. Some natural mechanisms for protein folding are Here Me_Think
IBM had a motto on every office desk many years ago: THINK Dionisio
#671 Me_Think
You can check Crop Circle images from various sources. One of the great sources is: Crop Circle Images. I am curious how a dFSCI calculation would be able to detect which one of them is natural and which is man-made.
As far as is known by now, what's the function of those crop circles? What do they do? What happens if there are too many or too few of them? What if there are none? What does the F in dFSCI stand for? What about the other letters in that acronym? Are those concepts related to the crop circles in any way? How? Sorry for asking these childish common sense questions. Please, be considerate and don't laugh at me. :) Dionisio
#671 Me_Think
You can check Crop Circle images from various sources. One of the great sources is: Crop Circle Images. I am curious how a dFSCI calculation would be able to detect which one of them is natural and which is man-made.
Is there any functional specified complex information associated with those crop circles? Dionisio
PS: The circles can be analysed via nodes and arcs meshes in a wireframe, i.e. FSCO/I, which is reducible to bits. I would scan and vectorise. Stars, circles, patterns of circles etc all looking a lot like commercial art-inspired drawings. kairosfocus
MT: The issue is to recognise reliably, cases of design on a relevant sign. There are trillions of cases [web etc pages are over the trillion mark, add screws, nuts, bolts etc and other things] of FSCO/I of directly known cause, all design, and nil of credible cases of FSCO/I of directly known cause not by intelligently directed configuration. This is a mark of inductive reliability, one backed up by the sparse search to config space with isolated islands of function analysis. When we turn to the root of the tree of life, OOL, we see a case of multiple FSCO/I involving gated, encapsulated integrated metabolic processes in an astonishing network, with numerically controlled protein synthesis in an automaton using codes and involving a von Neumann self replicator facility. This, to be assembled by sparse search on observed cosmos resources in a Darwin's pond or the like, backed by actual observational evidence. And the same sort of challenge proceeds beyond that point. The only vera causa backed means of getting the required FSCO/I is design, and indeed that is why Shapiro's metabolism first and Orgel's Genes first approaches came to mutual ruin. And if (per the OP) the laws of physics and Chemistry programmed life -- on what observations? -- that would point to an astonishing fine tuning of the cosmos above and beyond what is already on the table. When it comes to crop circles as a suggested case, find us one showing FSCO/I that reasonably came about by blind chance and mechanical necessity . . . ironically, this is a case where a design inference is routinely made on FSCO/I. Here, my emphasis -- and hi Joe welcome back -- is that in science explanations should be empirically controlled and the causal adequacy of suggested explanations needs to be shown on the ground not assumed for argument. KF kairosfocus
I don't see any natural crop circles there. Also I wouldn't use dFSCI to determine design of something that isn't readily amenable into bits. There are other design detection tools that are better in some situations. Joe
Joe @ 670 You can check Crop Circle images from various sources. One of the great sources is: Crop Circle Images. I am curious how a dFSCI calculation would be able to detect which one of them is natural and which is man-made. Me_Think
Me_Think- Please produce an example of a "natural crop circle". Thank you. Joe
DNA jock- what is it? What is "H"? It is up to you and yours to provide it and yours is the position which relies on necessity and chance. Joe
Hi Joe, Glad to see you're back. [Checks punctuation. Grins] Actually H is ALL chance hypotheses; Winston Ewert makes the hilarious mistake of considering them sequentially, rather than as a whole. If you, or anyone else, wants to calculate p(T|H) you are the ones who need to delineate H (and T) in sufficient detail to do the calculations. Still waiting... DNA_Jock
kairosfocus and gpuccio Thank you very much for the elaborate explanation. However, I am not convinced that dFSCI and related methods can help identify design. E.g. let's take two similar crop circles - one natural and one man-made. Both would exhibit the same pattern and both would thus have the same dFSCI. How would you know which was man-made based on dFSCI calculations? Me_Think
OK DNA Jock- please provide the relevant chance hypotheses. I say that you can't. "H" should be given a big fat 0 yet we give it the benefit of the doubt by granting it something greater than 0. Then our critics cry foul when in fact it is our critics who cannot provide H and it is our critics who need to do so. Joe
Kf @ 663 I agree with you that uniformitarianism is appropriate. Your inference that I do not apply it is incorrect, and appears to stem from a failure to understand the effects of selection over time. Simply put, extant proteins are optimized. Any sane person would expect the degree of substitution allowed to differ from the DoSA in a non-optimized protein, under uniformitarian principles. As I said to gpuccio in #544:
Bottom up studies like Keefe’s are the only way to explore the frequency of “the shores of the islands of function” in protein space (that I have heard of). Studies like McLaughlin explore the degree to which functional protein space is interconnected via single steps near an optimum. Durston asks “how broad is the peak?”, a question of secondary relevance, at best. Axe doesn’t explore anything; the paper is based on a glaring fallacy. See my attempt to explain this, inter alia, to Mung. WordPress is mangling my attempts to provide you with a linkout. Please enter “http://theskepticalzone.com/wp/?p=1472&cpage=7#comment-19065” in your browser. Dr. Axe is represented by “Dr. A” — I’m a subtle guy. (Off-topic: [snip].) There is not any inconsistency between, to use your terms, the forward data and the reverse data: Keefe’s forward data are compatible with McLaughlin’s and Durston’s reverse data. You may be mis-understanding Durston’s data. Axe himself is mis-understanding his own data. [Emphasis in original. I am referring to Gauger and Axe's "The Evolutionary Accessibility of New Enzyme Functions: A Case Study from the Biotin Pathway"]
Any comments about FSCO/I are moot until you find a way to calculate p(T|H). DNA_Jock
GP, I should note that a 3-d nodes arcs mesh is analogue, but can be reduced to a structured string of y/n q's that specify components, orientations, couplings and more, all of course targeting function. But we should note there are many ways to clump parts, but only relatively few will function correctly. The discussion on coded strings, with understanding that codes have contexts, is WLOG. KF kairosfocus
DJ: I am disappointed. It seems to me that the uniformity principle espoused by Newton and extended to origins studies by Lyell and Darwin alike, emphasised that that which we may not directly inspect ought to be explained on forces and factors we observe as acting here and now capable of sufficiently similar effect? Failing to provide a true cause with adequate capability known to be able to produce the effect and imposing ideologically loaded redefinitions of science and its methods to lock out causes that are adequate seems to me to be an abuse of the name of science and science education. In this case, all that is needed is to show that with reasonable likelihood and observability, FSCO/I can and is observed to be caused by blind chance and mechanical necessity. Am I therefore to interpret your remarks attacking imaginary YECs not present to defend themselves, as a backhanded admission that you do not have adequate blind watchmaker cause in hand? That would be a very interesting implication as FSCO/I is routinely and reliably produced by intelligently directed contingency, the very context in which design thinkers infer inductively that it is a strong and reliable sign of design, one backed up by what Axe speaks of as the sparse search challenge. KF PS: The abusive appellation, IDiots is a stock in trade of too many anti-design trolls on various attack sites. I point it out as an example of unfortunately typical schoolyard level contempt-laced namecalling. This, you know or should know. kairosfocus
Reality, sorry but while I am guilty of the crime of creating summary string, the FSCO/I concept is not my creation nor that of design thinkers. It is a commonplace in Engineering though it may not be put in those words, and as you can see it traces to work by Orgel and Wicken, major OOL researchers, in the 1970s. As for your rather familiar personalities and excuses for dodging providing empirical warrant for the Darwinist tree of life from the root up, all I have done for two years is take the Smithsonian presentation of the tree, point to root and branches and say, here is an open invitation to do a feature-length article that I will personally host here at UD that will empirically ground the claimed causal forces and process held to account for the world of life by evolutionary materialists and those who go along with them to one extent or another; without loading up on a priori assumptions and impositions that beg big questions. Perhaps, you would be so kind as to provide an answer that is more satisfactory than what Wiki provides, or Theobald -- fact, fact, FACT -- provides, or the composite based on discussion comments by EL and Jerad after a year of trying, a year ago? Remember, if you can provide a coherent narrative that actually makes the case it blows up design theory as applied to the world of life, answering also the intelligent evolution view espoused by co-founder of the theory, Wallace. The invitation stands open and there is no need for personalities if an answer on the merits can do the job. You may include any number of links and multimedia inclusions to Youtube etc, as well as images but need to provide a coherent, feature article length account. if you or your fellows cannot, but resort to personalities, that reeks of the familiar tactic of shooting at the messenger. KF kairosfocus
Me_Think at #641: "Is FSCO/I = dFSCI = FSC = CSI ?" I will try to help.

CSI is the general concept: Complex Specified Information. Even if there are different approaches to the formal definition of specification, CSI means the information needed for a particular specification. I have adopted a very simple definition of specification: Specification = any explicit rule which generates a binary partition in a search space. Given a specification, the complexity linked to it is the ratio between the target space (all the objects which satisfy the specification) and the search space, expressed as -log2.

Functional Specification is a subset of all possible specifications, where the rule which specifies is a function, defined explicitly and which can be assessed by an explicit procedure in any object of the search space, as present or absent, so that a binary partition is generated. In Functional Specification, we can accept any possible function definition for any object, and also different definitions for the same object. In any case, the complexity is computed for each specific definition.

FSC means Functionally Specified Complexity: it is the complexity linked to a functional specification. The complexity is computed in the same way as for any generic specification, as said above.

dFSI means digital Functionally Specified Information. It is a term that I have introduced to name a further subset of functional information, where we consider only digital sequences in objects. This is useful because the mathematical treatment is easier, and it is appropriate to our purposes, because biological information is mainly digital. Information and Complexity, in the ID context, mean the same thing (for all practical purposes, -log2 of the probability of the target space in a random search).

dFSCI is the binary form of dFSI: it is derived from the numeric value by using an appropriate threshold, which must be chosen according to the system and time span we are considering. Any object of the search space which exhibits dFSI above the chosen threshold is said to exhibit dFSCI and is a candidate for the design inference, if there is no reasonable evidence that it can be explained algorithmically in the system.

Finally, FSCO/I is the term used by KF. To be correct, I will leave the explanation to him, but I think it is essentially the same thing as my FSCI, without the digital restriction. As KF has explained many times, the digital restriction is without loss of generality, because any analog information can be converted to a digital form. My choice to discuss only digital information is purely methodological: it is simpler to do that.

Fit is the term used in the Durston paper to mean the bits linked to the functional restraint in proteins. It is the same as saying "bits of CSI".

I hope that can help. As you can see, the concept is rather the same, and the units are the same. There are different subsets of the same concept, that's all. gpuccio
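As a minimal numerical illustration of the -log2(target/search space) bookkeeping described above (a Python sketch with hypothetical inputs, not gpuccio's assessment procedure; the protein length, target-space count, and 500-bit threshold are placeholders):

    # dFSI as -log2(target space / search space), with illustrative numbers only.
    from math import log2

    length = 150                               # hypothetical protein length (AA sites)
    search_space_bits = length * log2(20)      # log2 of 20^150, ~648.3 bits
    functional_count = 10 ** 40                # hypothetical number of functional sequences
    target_bits = log2(functional_count)       # ~132.9 bits

    dfsi = search_space_bits - target_bits     # = -log2(target/search), ~515 bits
    threshold = 500                            # illustrative threshold for dFSCI
    print(f"dFSI ~ {dfsi:.0f} bits; above the {threshold}-bit threshold: {dfsi > threshold}")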
F/N: Pardon some not cleaned up chunking of clips from Sect A, with Durston et al 2007 (I hate how WP usually murders unusual symbols): ______________ >> 11 --> Durston, Chiu, Abel and Trevors [--> 2007] provide a third metric, the Functional H-metric [--> Shannon's H is avg info per symbol] in functional bits or fits, a functional bit extension of Shannon's H-metric of average information per symbol, here. The way the Durston et al metric works by extending Shannon's H-metric of the average info per symbol to study null, ground and functional states of a protein's AA linear sequence -- illustrating and providing a metric for the difference between order, randomness and functional sequences discussed by Abel and Trevors -- can be seen from an excerpt of the just linked paper. Pardon length and highlights, for clarity in an instructional context: Abel and Trevors have delineated three qualitative aspects of linear digital sequence complexity [2,3], Random Sequence Complexity (RSC), Ordered Sequence Complexity (OSC) and Functional Sequence Complexity (FSC). RSC corresponds to stochastic ensembles with minimal physicochemical bias and little or no tendency toward functional free-energy binding. OSC is usually patterned either by the natural regularities described by physical laws or by statistically weighted means. For example, a physico-chemical self-ordering tendency creates redundant patterns such as highly-patterned polysaccharides and the polyadenosines adsorbed onto montmorillonite [4]. Repeating motifs, with or without biofunction, result in observed OSC in nucleic acid sequences. The redundancy in OSC can, in principle, be compressed by an algorithm shorter than the sequence itself. As Abel and Trevors have pointed out, neither RSC nor OSC, or any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life [5]. FSC includes the dimension of functionality [2,3]. Szostak [6] argued that neither Shannon's original measure of uncertainty [7] nor the measure of algorithmic complexity [8] are sufficient. Shannon's classical information theory does not consider the meaning, or function, of a message. Algorithmic complexity fails to account for the observation that 'different molecular structures may be functionally equivalent'. For this reason, Szostak suggested that a new measure of information–functional information–is required [6] . . . . Shannon uncertainty, however, can be extended to measure the joint variable (X, F), where X represents the variability of data, and F functionality. This explicitly incorporates empirical knowledge of metabolic function into the measure that is usually important for evaluating sequence complexity. This measure of both the observed data and a conceptual variable of function jointly can be called Functional Uncertainty (Hf) [17], and is defined by the equation: H(Xf(t)) = -ΣP(Xf(t)) log P(Xf(t)) . . . (1) where Xf denotes the conditional variable of the given sequence data (X) on the described biological function f which is an outcome of the variable (F). For example, a set of 2,442 aligned sequences of proteins belonging to the ubiquitin protein family (used in the experiment later) can be assumed to satisfy the same specified function f, where f might represent the known 3-D structure of the ubiquitin protein family, or some other function common to ubiquitin.
The entire set of aligned sequences that satisfies that function, therefore, constitutes the outcomes of Xf. Here, functionality relates to the whole protein family which can be inputted from a database . . . . In our approach, we leave the specific defined meaning of functionality as an input to the application, in reference to the whole sequence family. It may represent a particular domain, or the whole protein structure, or any specified function with respect to the cell. Mathematically, it is defined precisely as an outcome of a discrete-valued variable, denoted as F={f}. The set of outcomes can be thought of as specified biological states. They are presumed non-overlapping, but can be extended to be fuzzy elements . . . Biological function is mostly, though not entirely determined by the organism's genetic instructions [24-26]. The function could theoretically arise stochastically through mutational changes coupled with selection pressure, or through human experimenter involvement [13-15] . . . . The ground state g (an outcome of F) of a system is the state of presumed highest uncertainty (not necessarily equally probable) permitted by the constraints of the physical system, when no specified biological function is required or present. Certain physical systems may constrain the number of options in the ground state so that not all possible sequences are equally probable [27]. An example of a highly constrained ground state resulting in a highly ordered sequence occurs when the phosphorimidazolide of adenosine is added daily to a decameric primer bound to montmorillonite clay, producing a perfectly ordered, 50-mer sequence of polyadenosine [3]. In this case, the ground state permits only one single possible sequence . . . . The null state, a possible outcome of F denoted as ø, is defined here as a special case of the ground state of highest uncertainty when the physical system imposes no constraints at all, resulting in the equi-probability of all possible sequences or options. Such sequencing has been called "dynamically inert, dynamically decoupled, or dynamically incoherent" [28,29]. For example, the ground state of a 300 amino acid protein family can be represented by a completely random 300 amino acid sequence where functional constraints have been loosened such that any of the 20 amino acids will suffice at any of the 300 sites. From Eqn. (1) the functional uncertainty of the null state is represented as H(Xø(ti)) = -ΣP(Xø(ti)) log P(Xø(ti)) . . . (3) where (Xø(ti)) is the conditional variable for all possible equiprobable sequences. Consider the number of all possible sequences is denoted by W. Letting the length of each sequence be denoted by N and the number of possible options at each site in the sequence be denoted by m, W = m^N. For example, for a protein of length N = 257 and assuming that the number of possible options at each site is m = 20, W = 20^257. Since, for the null state, we are requiring that there are no constraints and all possible sequences are equally probable, P(Xø(ti)) = 1/W and H(Xø(ti)) = -Σ(1/W) log (1/W) = log W . . . (4) The change in functional uncertainty from the null state is, therefore, ΔH(Xø(ti), Xf(tj)) = log (W) - H(Xf(ti)). (5) . . . . The measure of Functional Sequence Complexity, denoted as ζ, is defined as the change in functional uncertainty from the ground state H(Xg(ti)) to the functional state H(Xf(ti)), or ζ = ΔH(Xg(ti), Xf(tj)) . . .
(6) The resulting unit of measure is defined on the joint data and functionality variable, which we call Fits (or Functional bits). The unit Fit thus defined is related to the intuitive concept of functional information, including genetic instruction and, thus, provides an important distinction between functional information and Shannon information [6,32]. Eqn. (6) describes a measure to calculate the functional information of the whole molecule, that is, with respect to the functionality of the protein considered. The functionality of the protein can be known and is consistent with the whole protein family, given as inputs from the database. However, the functionality of a sub-sequence or particular sites of a molecule can be substantially different [12]. The functionality of a sub-molecule, though clearly extremely important, has to be identified and discovered . . . . To avoid the complication of considering functionality at the sub-molecular level, we crudely assume that each site in a molecule, when calculated to have a high measure of FSC, correlates with the functionality of the whole molecule. The measure of FSC of the whole molecule, is then the total sum of the measured FSC for each site in the aligned sequences. Consider that there are usually only 20 different amino acids possible per site for proteins, Eqn. (6) can be used to calculate a maximum Fit value/protein amino acid site of 4.32 Fits/site [NB: Log2 (20) = 4.32]. We use the formula log (20) - H(Xf) to calculate the functional information at a site specified by the variable Xf such that Xf corresponds to the aligned amino acids of each sequence with the same molecular function f. The measured FSC for the whole protein is then calculated as the summation of that for all aligned sites. The number of Fits quantifies the degree of algorithmic challenge, in terms of probability, in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribsomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space. A high Fit value for individual sites within a protein indicates sites that require a high degree of functional information. High Fit values may also point to the key structural or binding sites within the overall 3-D structure. 11 --> Thus, we here see an elaboration, in the peer reviewed literature, of the concepts of Functionally Specific, Complex Information [FSCI] (and related, broader specified complexity) that were first introduced by Orgel and Wicken in the 1970's. This metric gives us a way to compare the fraction of residue space that is used by identified islands of function, and so validates the islands of function in a wider configuration space concept. So, we can profitably go on to address the issue of how plausible it is for a stochastic search mechanism to find such islands of function on essentially random walks and trial and error without foresight of location or functional possibilities. We already know that intelligent agents routinely create entities on islands of function based on foresight, purpose, imagination, skill, knowledge and design. 
12 --> Such entities typically exhibit FSCI, as Wicken describes: ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)] 13 --> The Wicken wiring diagram is actually a very useful general concept. Strings of elements -- e.g. S-T-R-I-N-G -- are of course a linear form of the nodes, arcs and interfaces pattern that is common in complex structures. Indeed, even the specification of controlled points and a "wire mesh" that joins them then is faceted over in digital three-dimensional image modelling and drawing, is an application of this principle. Likewise, the flow network or the flowchart or blocks and arrows diagrams common in instrumentation, control, chemical engineering and software design are another application. So is the classic exploded view used to guide assembly of complex machinery. All such can be reduced to combinations of strings that specify nodes, interfaces and interconnecting relationships. From this set of strings, we can get a quantitative estimate of the functionally specific complex information embedded in such a network, and can thus estimate the impact of random changes to the components on functionality. This allows clear identification and even estimating the scope of Islands of Function in wider configuration spaces, through a Monte Carlo type sampling of the impacts of random variation on known functional configurations. (NB: If we add in a hill climbing subroutine, this is now a case of a genetic algorithm. Of course the scope of resources available limits the scope of such a search, and so we know that such an approach cannot credibly initially find such islands of function from arbitrary initial points once the space is large enough. 1,000 bits of space is about 1.07 * 10^301 possibilities, and that is ten times the square of the number of Planck-time states for the 10^80 or so atoms in our observed cosmos. That is why genetic type algorithms can model micro-evolution but not body-plan origination level macro-evolution, which credibly requires of the order of 100,000+ bits for first life and 10,000,000+ for the origin of the dozens of main body plans. So far, also, the range of novel functional information "found" by such algorithms navigating fitness landscapes within islands of function -- intelligently specified, BTW -- seems (from the case of ev) to have peaked at less than 300 bits. HT, PAV.) 14 --> Indeed, the use of the observed variability of AA sequences in biological systems by Durston et al, is precisely an example of this an entity that is naturally based on strings that then fold to the actual functional protein shapes. 
>> _____________ Hope this further helps. Fireman duties start early today, so later. KF kairosfocus
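To make the per-site arithmetic in the Durston excerpt above concrete, here is a minimal Python sketch of the fits bookkeeping: null-state uncertainty log2(20) per site minus the observed Shannon uncertainty of each aligned column, summed over columns. The three aligned fragments are toy placeholders, not real family data, and no sequence weighting or gap handling is attempted.

    # Toy fits (functional bits) calculation over a tiny, made-up alignment.
    from collections import Counter
    from math import log2

    alignment = [          # hypothetical aligned protein fragments (equal length)
        "MKVLAT",
        "MKVIAT",
        "MRVLAS",
    ]

    def site_entropy(column):
        """Shannon uncertainty H (bits) of one aligned column."""
        counts = Counter(column)
        total = sum(counts.values())
        return -sum((c / total) * log2(c / total) for c in counts.values())

    null_per_site = log2(20)                       # ~4.32 bits if all 20 AAs were allowed
    fits = sum(null_per_site - site_entropy(col)   # delta-H summed over the sites
               for col in zip(*alignment))
    print(f"toy FSC estimate: {fits:.2f} fits over {len(alignment[0])} sites")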
Me_Think at #644: "gpuccio explained that dFSCI doesn’t detect design, only confirms if a design is real design or apparent design." I don't understand what you mean. dFSCI is essential to distinguish between true design and apparent design; therefore it is an essential part of scientific design detection. If you are not able to distinguish between true design and apparent design, you are making no design detection; you are only recognizing the appearance of design, which is not a scientific procedure because it has a lot of false positives and a lot of false negatives. So, just recognition of the appearance of design is not scientific design detection. On the contrary, dFSCI eliminates the false positives, and design detection becomes a scientific reality. Therefore, dFSCI is an essential part of scientific design detection. Surely you can understand such a simple concept, can't you? gpuccio
PPS: To understand things connected to information concepts please read Section A my always linked through my name on every comment I have made at UD; which has a boiled down 101 (based ultimately on what I used to teach T/comms students . . . and yes that is my version on Shannon's Diagram though IIRC I found a similar thing in Duncan's A Level Physics. My context is this feeds into the layercake style T/comms protocol models that are now common). The First appendix on thermodynamics matters will also help. PPPS: GP is a busy Physician, I am mostly doing policy analysis stuff in support of a recent change of govt here, with a fair amount of fireman on tap to go with it, rejoice in insomnia power for stuff you see here at UD. kairosfocus
PS: Closed comments FYIs are based on a need to provide reference, supplementary info, to headline things easy to overlook, to give graphics and/or vids, and the situation of a problem with what has to be called internet vandalism or trollish behaviour. They normally are linked to live threads of discussion. kairosfocus
F/N: I note:
KF has closed comment threads about dFSCI and gpuccio explained that dFSCI doesn’t detect design, only confirms if a design is real design or apparent design. Joe refers to FSC paper which discusses fit and not bit, so it would be helpful if someone can give the relationship between all those ID units to detect/confirm design .
1 --> Functional specificity based on organisation and coupling of interacting parts to yield relevant function (e.g. info storage/ transmission/ control of NC process, operation of a multi-part entity based on a 3-d nodes & arcs wiring diagram, etc) is a relevant, observable form of specification. It is also a commonplace, ranging from text in English to source and object programs to fishing reels, pens, gear trains, machine tools, pc mother boards, oil refineries, ribosomes and wider protein synthesis, the D/RNA molecules, proteins and esp. enzymes. 2 --> Due to the interactive coupling based on particular configurations there is a lock-down to relatively few of the many possible clumped or scattered possibilities for the same parts/components etc. Thus, islands of function. 3 --> Digitally coded functionally specific complex information [dFSCI] is a subset of functionally specific complex organisation and associated information [FSCO/I], using a string data structure to store coded info, cf punched paper tape, magnetic tape, memory registers in a PC and R/DNA. In turn, FSCO/I or even just FSCI, is a subset of specified complexity or complex specified information, emphasising the biologically and technologically most relevant form of specification, function depending on properly arranged and coupled interacting parts. Both CSI and FSCO/I trace to the concepts put on the table across the 1970's by Orgel and Wicken. Let me cite them:
ORGEL, 1973: . . . In brief, living organisms are distinguished by their specified complexity [--> AmHD: Consisting of interconnected or interwoven parts; composite. the living organisms context points to biofunctions dependent on specific arrangements of interconnected, interacting parts]. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [The Origins of Life (John Wiley, 1973), p. 189. [--> later attempts to quantify and generate metric models build on this] ] WICKEN, 1979: ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]
4 --> Observation and recognition of FSCO/I and its significance do not depend on the particulars of any one metric model, but we routinely measure it based on the length of the string of y/n q's required to specify a given state in the field of possibilities, cf the details mode display of a folder on a PC, though note bytes are 8-bit clusters and k's are based on 2^10 = 1024. 5 --> Your understanding of GP is flawed, dFSCI is an example of a tested empirically reliable sign pointing strongly to design. 6 --> Functional sequence complexity -- FSC, as discussed by Durston, Abel, Trevors etc, has to do with FSCO/I. Fits means functional, binary digits, i.e. functionally specific bits. It is given, eg in the Durston et al 2007 paper. 7 --> In all these cases, once we are beyond a reasonable threshold of complexity, functionally specific, organised complexity is maximally implausible to have come about by sparse blind chance and mechanical necessity -- Blind Watchmaker thesis -- search of an abstract space of possible configurations. This last is essentially a cut-down phase space used in Mathematics, Physics, Chemistry, Thermodynamics and Engineering, we are not looking at momentum. This is the context of the metaphor, searching for islands of function in vast seas of possible configs dominated by non-function. (Think about parts for the fishing reel, and BTW, the conventional/baitcasting reel IIRC was originally developed by watchmakers.) 8 --> This is as opposed to incremental climbing of fitness sloped typically viewed as hills of reasonable smoothness. 9 --> Detect design is a tricky term. It hints at, universal decoder algorithm, but computation theory points out that that is not a reasonable expectation. Instead we are looking at an explanatory filter that contemplates aspects of an entity, phenomenon or process and asks, can we on observable after the fact evidence infer causal factors crucial to the outcomes we see? 10 --> Mechanical necessity rooted in forces and materials of nature gives rise to natural regularities, such as ripe guavas and breadfruit or mangoes or even apples dropping at 9.8 N/kg from a tree. Likewise, attenuating for distance, the same flux accounts for the centripetal force on the Moon keeping it in orbit around Earth. And of course it took genius to make that connexion, and so doing launched the Newtonian synthesis. 11 --> If an aspect of a phenomenon or process shows high contingency of outcomes under similar initial conditions, the likely suspect is chance based stochastic contingency, e.g. a die falls, tumbles and settles to a reading. 12 --> But of course, if the die persistently shows unusual outcomes, we suspect loading . . . or at least that's what would happen in Las Vegas. The reason is of course that functional specificity of the pattern of outcomes beyond a certain point is increasingly implausible under stochastic contingency but very plausible under intelligently directed contingency and/or configuration and/or contrivance. (If you study a bit on loaded dice you will never be inclined again to bet anything significant on dice! Subtle are the ways of trickery. And yes, loading of dice is a good place to begin understanding design.) I trust this helps. KF kairosfocus
Did I shut down this discussion yet? :) With serious interlocutors this discussion would get much deeper and more interesting, but with mockers it goes nowhere. Still, the onlookers/lurkers may benefit from seeing what's going on: a bunch of interlocutors who avoid answering questions or complain about simple yes/no questions, calling them 'diversionary' (post 648) or accusing others of wanting to change the subject (post 651). Pathetic. Dionisio
#650 Me_Think #647 follow-up
Why – so everyone (including ID proponents) can understand how those terms are related. How – we can calculate one and derive the other terms from a single calculation. To whom – pretty much everyone (including you) who needs to understand the terms and make sense of the calculations.
Why do you need to understand the terms and make sense of the calculations? BTW, your assumption is wrong - you included me, but I don't need to understand those terms or make sense of those calculations. Dionisio
#651 keith s
Dionisio is particularly eager to change the subject,...
Why do you say so? What do you base your argument on? How do you know my intention? Dionisio
#650 Me_Think #647 follow-up
Why – so everyone (including ID proponents) can understand how those terms are related. How – we can calculate one and derive the other terms from a single calculation. To whom – pretty much everyone (including you) who needs to understand the terms and make sense of the calculations.
Apparently KF and gpuccio have explained those terms extensively on more than one occasion. They have provided links to those explanations. Why do they have to write it all over again? Please keep in mind that those two gentlemen have daily responsibilities at their work. They seem to use a substantial part of their limited spare time to write in this blog. Their dedication to writing here is very commendable and highly appreciated by some of us here. Just read what they wrote already. If you have specific questions, ask them. But make sure your questions sound serious. Otherwise KF and gpuccio have every right to ignore your questioning, although for the sake of the onlookers/lurkers most times they answer anyway. Generally speaking, from my observations here, both KF and gpuccio seem much more patient than me when it comes to answering questions. Most times they go directly to explaining things in detail, with much pedagogical care. I admire their enormous patience. Definitely I lack such virtue (well, I don't possess any virtue, as far as I know). Perhaps that's one of the reasons DNA_Jock prefers to discuss anything with gpuccio and KF rather than with me. He's right. I would think twice before engaging in a discussion with a nasty guy like me, who responds to questions with more questions. :) Dionisio
Dionisio is particularly eager to change the subject, because CSI, FSCO/I, and dFSCI are taking an absolute drubbing in this thread. What an embarrassment for ID. keith s
Dionisio @ 647 Why - so everyone (including ID proponents) can understand how those terms are related. How - we can calculate one and derive the other terms from a single calculation. To whom - pretty much everyone (including you) who needs to understand the terms and make sense of the calculations. Me_Think
#648 Reality
Dionisio, your diversionary questions have nothing to do with what I pointed out.
Who said they have anything to do with that? Do you only respond to questions that have to do with what you have pointed out? The first question in that post was just a simple 'yes/no' question. No one has asked you to answer other questions. Read the post carefully. Go ahead, try again. I'm sure you can do much better than this. :) Dionisio
Dionisio, your diversionary questions have nothing to do with what I pointed out. Reality
#644 Me_Think
KF has closed comment threads about dFSCI and gpuccio explained that dFSCI doesn't detect design, only confirms whether a design is real design or apparent design. Joe refers to the FSC paper, which discusses fits and not bits, so it would be helpful if someone could give the relationship between all those ID units used to detect/confirm design.
Why would that be helpful? How could it be helpful? Helpful to whom? Dionisio
Me_Think Please, can you answer the first question in post #645? Thank you. Dionisio
#643 Reality Have you ever dealt with questions like these?
What makes myosin VIII become available right when it's required for cytokinesis? Same question for actin. What genes are they associated with? What signals trigger those genes to express those proteins for cytokinesis? BTW, what do the transcription and translation processes for those two proteins look like? Are they straightforward, or convoluted through some splicing and the like? Are there chaperones involved in the post-translational 3D folding? Where is the protein delivered to? How does that delivery occur? How does the myosin pull the microtubule along an actin filament? How many of each of those proteins should get produced for that particular process? Any known problems in the cases of deficit or excess?
Dionisio
Dionisio @ 642
Hasn’t your question been addressed by KF and gpuccio before?
KF has closed comment threads about dFSCI and gpuccio explained that dFSCI doesn't detect design, only confirms whether a design is real design or apparent design. Joe refers to the FSC paper, which discusses fits and not bits, so it would be helpful if someone could give the relationship between all those ID units used to detect/confirm design. Me_Think
"Show actual chance variation plus actual differential reproductive success creating a new body plan or major body plan feature involving FSCO/I on our observation — the vera causa test. Not minor adaptations like oscillations in Finch beaks or loss of eyes in cave fish, actual addition of large scale functional organisation and info on realistic populations and time lines." kairosfocus's "challenge" is based on false and vague premises. FSCO/I is a term that he made up that has no credibility with evolutionary biologists and has nothing to do with evolutionary theory. His unreasonable expectations also include but don't define "realistic populations and time lines". He will apparently only accept what meets his YEC definition of "realistic populations and time lines". It is not the responsibility of evolutionary biologists or any other scientists to cater to kairosfocus's unreasonable, non-scientific expectations. Reality
#641 Me_Think Hasn't your question been addressed by KF and gpuccio before? Can you just read what they wrote about it? Dionisio
keith s you may want to attempt meeting this ‘easy’ challenge, courtesy of KF: Show actual chance variation plus actual differential reproductive success creating a new body plan or major body plan feature involving FSCO/I on our observation
Is FSCO/I = dFSCI = FSC = CSI ? Me_Think
635 DNA_Jock
Sorry to disappoint, but I am not a Young Earth Creationist. They are the only people according to whose world view it should be possible to meet kf’s vera causa test.
Are you sure about what you wrote? I think that there are many folks on this UD blog who would agree with KF's challenge, but they are not YECs. I'm not a YEC myself (I don't even understand exactly what that acronym stands for). Actually, I don't even consider myself an ID proponent, though I agree with many of the central concepts associated with ID. My identity is not in any of those acronyms. Do you understand now? Dionisio
keith s you may want to attempt meeting this 'easy' challenge, courtesy of KF:
Show actual chance variation plus actual differential reproductive success creating a new body plan or major body plan feature involving FSCO/I on our observation — the vera causa test. Not minor adaptations like oscillations in Finch beaks or loss of eyes in cave fish, actual addition of large scale functional organisation and info on realistic populations and time lines.
Dionisio
Learned Hand you may want to attempt meeting this 'easy' challenge, courtesy of KF:
Show actual chance variation plus actual differential reproductive success creating a new body plan or major body plan feature involving FSCO/I on our observation — the vera causa test. Not minor adaptations like oscillations in Finch beaks or loss of eyes in cave fish, actual addition of large scale functional organisation and info on realistic populations and time lines.
Dionisio
#635 DNA_Jock As you should remember, I told you about my poor reading comprehension skills. :) I better let KF discuss that point with you directly. However, the last sentence can be removed without affecting the challenge. :) Dionisio
DNA_Jock Would you treat me better if I told you that my wife and I have a beautiful orange tabby cat and that we had a golden retriever for almost 16 years when our children were younger? I agree with you that canines and felines are quite different in certain aspects of their physiology and especially in their behaviors, but remember that they all, along with us humans, share a FUCA and a LUCA, and came into being by the power of the magic 'n-D e' formula RV+NS+T. Hey buddy, you're much better than I am. But you may want to try being more considerate to those who are not as good as you are. You may want to learn from gpuccio's example. He is very nice to everyone, including his interlocutors. :) Dionisio
Dionisio, LMAO Sorry to disappoint, but I am not a Young Earth Creationist. They are the only people according to whose world view it should be possible to meet kf's vera causa test. It's "challenges" like this one that display the ignorance of most "IDiots" (that's kf's term, not mine). Here's a test of your comprehension skills: what is wrong with the final sentence of kf's challenge? DNA_Jock
#629 DNA_Jock
I love a challenge.
Oh, really? do you? Then, can you meet this one, courtesy of KF?
Show actual chance variation plus actual differential reproductive success creating a new body plan or major body plan feature involving FSCO/I on our observation — the vera causa test. Not minor adaptations like oscillations in Finch beaks or loss of eyes in cave fish, actual addition of large scale functional organisation and info on realistic populations and time lines. Production of FSCO/I involving intelligently directed contingency is a routine matter, and thanks to Venter et al, engineering of genes is demonstrated fact.
Dionisio
Learned Hand, We've tumbled into a world where Logic is not spoken. KF and gpuccio claim that FSCO/I and dFSCI are useful. Gpuccio suggested a test procedure to prove this. Yet both KF and gpuccio admit that you don't even need to do the calculation. It reveals absolutely nothing that you didn't already know. Why would anyone bother? Gpuccio, can you come up with a test procedure in which dFSCI actually does something useful, for a change? It's pretty clear why you and KF don't submit papers on this stuff. Even an ID-friendly journal would probably reject it, unless they were truly desperate. keith s
LH, Let me first point to the nubbin of the matter, as I noted at 589, on even going to one bit per AA:
From my lost draft, I add: take Cy-C and halve the info metric value from Yockey, 125 bits call it. Say, 100 proteins of similar avg value per AA as the halved Cy-C; we have 12,500 functionally specific bits to get DNA for, and to get support machinery for already, such as ribosomes [which use a lot of RNA]. Such is well past the 500 or 1,000 bit thresholds. The only empirically warranted source for such -- blind needle-in-haystack search being implausible -- is design. And, by starting from a simple approach and then adjusting per factors, we can see how we get there, though there is a lot of underlying work by Yockey etc. there. Durston et al 2007 did fairly similar work, which is unfortunately not easy to follow for those likely to be reading a blog. Those who need it know where to find it. The point is, even if, after going through various factors, we set about one y/n choice per AA as the info content of a typical protein on average, once we set that in the context of hundreds of proteins we are back to the same basic conclusion -- if we are willing to allow the force of the inductive patterns of reasoning that undergird science. And believe us, there have been objectors about who would burn down not only induction but logic.
That is, at OOL, even taking 1 bit per AA, on a toy model of 100 proteins of scale comparable to Cy-C we are still at 10,000+ bits of FSCO/I to account for, not to mention the origin of a code and support system. The RNA, the organisation of the cell to make the proteins work, and much more are still to be reckoned with. Every bit beyond 1,000 bits doubles a config space already beyond 1.07 * 10^301 possibilities. At 10,000 bits, we are dealing with a space of about 2 * 10^3010, to be searched by blind watchmaker processes that can account for at most ~10^111 atomic-scale events: 10^80 atoms at 10^14 events per second (a fast chemical reaction rate) for 10^17 s. That is sparse search of a big haystack, without any reasonable way to bring self-replication and the hoped-for powers of chance variation and differential reproductive success to bear, as those too are to be accounted for.

As for the isolation of proteins, it is generally acknowledged that there are thousands of fold domains, largely unrelated to one another in terms of sequence, leading to deeply isolated islands of function. Cf. the current thread here on that, which BTW draws on a professional literature source. Typical numbers cited run about 1 in 10^65 to 10^77, as I recall. So even if Bradley made an error of citation, he is in the general ballpark.

As well, his logic is generally right, and obviously so to someone who knows a little of how the info content of a string is assessed. Which is why I gave the summary. We start from the causal-chain end and see that there is no physical or chemical barrier to sequence succession, so there is good reason to take the number of y/n questions required to specify a state as a good index of the info content implied in what is used. After that one may look at the actual patterns and do a Shannon H = - SUM p_i log2 p_i measure, and may adjust for flexibility of characters that does not affect functionality. Do that, work with half his result, and even at 1 bit per AA we end up with the same material result.

FSCO/I needs to be explained, and there is no good blind watchmaker thesis account that is adequately observationally backed. Intelligently directed configuration, aka design, routinely creates FSCO/I, as posts in this thread show. Dead links are a bane, and I don't have time to do a deep search. I need to get on the horn, back to duty calls. Later. KF kairosfocus
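(A back-of-the-envelope check of the sparse-search arithmetic in the comment above; all figures are the toy numbers quoted there, not new estimates.)

    from math import log10

    atoms          = 1e80   # atoms in the observed cosmos
    events_per_sec = 1e14   # fast chemical reaction rate
    seconds        = 1e17   # rough age of the cosmos in seconds
    max_events     = atoms * events_per_sec * seconds   # ~1e111 atomic-scale events

    bits = 10_000   # toy model: 100 proteins of ~100 AA each at 1 bit per AA
    log10_config_space = bits * log10(2)   # log10(2^10000) ~ 3010

    print(f"events available:  ~1e{log10(max_events):.0f}")    # ~1e111
    print(f"configs to search: ~1e{log10_config_space:.0f}")   # ~1e3010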
D: That is the challenge. Maybe the most significant thing about the debate at and around UD since 2012 -- cf here a few days back -- is that there is a standing offer to host at UD a warranting case for the Darwinist blind watchmaker thesis origins narrative on chance and necessity, from OOL up to us. After a full year I had to cobble together a composite response, which basically conceded there is no solid OOL case and was rather wobbly on the origin of body plans. In the meanwhile I had taken up Wiki and Theobald as stand-ins, neither of which came across very well. Since then, there has been no interest whatsoever in a further, more serious attempt. Much interest in attacking and dismissing ID thinking and supporters, but little in laying out their own case. And yet, such a solid, empirically grounded case would answer decisively. The dog that would not bark is telling. KF kairosfocus
KF
Show actual chance variation plus actual differential reproductive success creating a new body plan or major body plan feature involving FSCO/I on our observation — the vera causa test. Not minor adaptations like oscillations in Finch beaks or loss of eyes in cave fish, actual addition of large scale functional organisation and info on realistic populations and time lines. Production of FSCO/I involving intelligently directed contingency is a routine matter, and thanks to Venter et al, engineering of genes is demonstrated fact.
Clear challenge. That's it. Just don't hold your breath waiting for anyone to meet it soon (or ever). Dionisio
Gpuccio, Thank you for the kind words. You actually read, understand, and respond to what is addressed to you. I think this makes you unique amongst the regulars here. Even if it’s a “sidetrack”, having a discussion with you is enjoyable.
[ in reference to my having a dog] I have three cats. How could we ever expect to understand each other!
Actually, I too am a “cat person”, but in the interests of staying married, we have had two dogs. I think that dogs, even more so than non-human primates, offer a fascinating case-study in communication and empathy. Cats, on the other hand, are completely effing inscrutable; I love a challenge.
You will maybe admit that there is some pre-commitment to a specific worldview here. No problem in that. I respect pre-commitments. I have mine too. But they are different.
No argument here. EVERYONE has their pre-commitments. Sadly, many people are blissfully unaware of them. Exhibit 1: people who find ID (or MES) attractive for theological reasons. On a related topic, I find the interaction between (usually amateur) philosophers of science and people who have actually done basic research very entertaining. It may be my intellectual arrogance (not joking here), but I think the former are far, far more prone to confirmation bias. There’s a reason for that. ;)
I must say that I have great respect for Dionisio and for his contributions here.
I’m sorry to say I don’t see why. DNA_Jock
Kairosfocus, Sorry for the delay replying. The analysis you offer (in #587) by Bradley is interesting. Unfortunately, your citation for this analysis, [http://www.oneplace.com/common/pdf/ministries/creation_update/who_is_the_designer/07OriginOfLifePRNT.pdf], is dead. Do you have another? Perhaps this analysis has been published in a mainstream biology journal? But we can discuss what you quote him as saying.

I hesitate to use the word, but starting off with "pi = 0.05 for all residues", which yields 4.32 bits per amino acid, is, ahem, a strawman. Taking into account the observed frequencies in extant proteins is somewhat more honest, but still ignores the effect of correlation. This brings bits/aa down slightly, to 4.139. Not much of a movement; one might argue that the change is "not material".

However, I was able to find (Strait and Dewey, 1996), Bradley's source for the 1 in 10^75 number. Strangely, that value is not to be found in the cited paper. Never mind. OTOH Strait and Dewey do offer up a number of ways of trying to estimate the information content in extant proteins, methods which do at least try to take into account the lack of independence between amino acids, if only partially: using Zipf or k-tuplet analysis they obtain values of between 2.4 and 2.6 bits/aa; using a Chou-Fasman algorithm (which attempts to capture the effect of proteins being three-dimensional) they obtain a value of 2.0 bits/aa. Dewey's previous work suggests that Kolmogorov complexity (a different beast, I admit, but a rather interesting one) is only 1 bit/aa.

So, even using the sequences of extant proteins as the source data (which is inappropriate if we wish to understand the early evolution of a protein), we find that the reduction-in-uncertainty per residue drops two-fold when any attempt is made to account for correlation. Therefore, blithely asserting that the lack of independence is "not material" is going to require some real data to support it. Of course, to address this issue properly, you are going to have to also deal with the subject of my discussion with gpuccio: selection. Until you have dealt with both, you have not calculated p(T|H) for any biological object. Ever.

"we are looking at the results of deeply entrenched, often indoctrinated in core assertions of a system of thought." I'll say.

P.S. You did not answer my question "Are you quite comfortable with Durston's assumption that the exploration of insulin's aa sequence has been a random walk, without any intervention?". I understand your reticence; as I noted when I first asked you, it is a trap.

P.P.S. Do any of the regulars here know why Bob Sauer used lambda repressor, perhaps the most highly evolved protein on the planet, for his mutagenesis study? It seems a somewhat biased choice. I do think there is an innocent explanation. DNA_Jock
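(For readers who want to reproduce the per-residue numbers under discussion, a small sketch of the Shannon calculation. The uniform case gives the 4.32 bits/aa figure; feeding in a table of observed amino-acid frequencies, not supplied here, gives the lower, correlation-blind estimate.)

    from math import log2

    def bits_per_residue(freqs):
        """Shannon information per residue: H = -sum(p_i * log2(p_i))."""
        return -sum(p * log2(p) for p in freqs if p > 0)

    # Uniform assumption, p_i = 0.05 for each of the 20 amino acids:
    print(bits_per_residue([0.05] * 20))   # 4.3219... bits/aa, i.e. log2(20)

    # Observed amino-acid frequencies would yield the ~4.14 bits/aa quoted above;
    # correlation-aware estimates (Zipf, k-tuplet, Chou-Fasman) drop to ~2.0-2.6.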
Adapa
Now all you have to do is disprove the ability of genetic variation filtered by selection to produce new and often complex features. You only have about 70 years’ worth of empirical scientific data to overturn. Best of luck, let us know how it turns out.
Please define the terms "new and often complex features" and explain how their production supports the claim that Darwinian processes can produce macro-evolution. StephenB
Adapa, actually, scientifically, it is you who need to show that power. Show actual chance variation plus actual differential reproductive success creating a new body plan or major body plan feature involving FSCO/I on our observation -- the vera causa test. Not minor adaptations like oscillations in Finch beaks or loss of eyes in cave fish, actual addition of large scale functional organisation and info on realistic populations and time lines. Production of FSCO/I involving intelligently directed contingency is a routine matter, and thanks to Venter et al, engineering of genes is demonstrated fact. KF kairosfocus
StephenB It makes 1 a better explanation than 2. Now all you have to do is disprove the ability of genetic variation filtered by selection to produce new and often complex features. You only have about 70 years' worth of empirical scientific data to overturn. Best of luck, let us know how it turns out. Adapa
Adapa
1. Design did it 2. RV + NS did it 3. A currently unknown natural process besides RV +NS did it.
Why would you choose an unknown cause over a cause already known to produce the effect?
Disproving 2. doesn’t make 1. right.
It makes 1 a better explanation than 2.
That’s why science requires positive evidence for claims,...
And yet you appeal to an unknown cause. StephenB
gpuccio No dichotomies and no default. Just empirical science. Your whole ID argument rests on a false dichotomy and faulty logic. The scientific community recognizes it which is why the argument is rejected. Pity that your biases make you too blind to see it. Or perhaps at some level you do recognize the fatal flaws which is why you won't submit your ideas to any scientific journals. You gave it your best shot but you failed. It's not the end of the world. Props to you for trying. Adapa
Adapa: I am tired of listening to your "arguments". OK. 1. is a good explanation, the best explanation. Indeed, the only one available. 2. is a bad explanation, which does not work. 3. is no explanation at all, just wishful thinking. No dichotomies and no default. Just empirical science. gpuccio
gpuccio The simple point is: if I infer design for ATP synthase, it is because I am sure that nobody can offer an explicit algorithmic explanation for it. Blind faith in the undemonstrated powers of generic RV + NS is not a valid explanation. You couldn't make your basic logic failure any clearer. You are offering the classic false dichotomy. If RV + NS can't explain it then it must be Design by default. But you always forget the third option. 1. Design did it 2. RV + NS did it 3. A currently unknown natural process besides RV +NS did it. Disproving 2. doesn't make 1. right. That's why science requires positive evidence for claims, and why trying to support ID strictly by attempting to falsify ToE will never work. Adapa
Adapa, are you familiar with inductive logic? In that logic, one validates a pattern or explanation as sufficiently reliable on known cases, to trust or take seriously on cases where one cannot cross check. dFSCI, and the broader FSCO/I are tested and reliable on billions of cases. We have a buttressing analysis on sparse search for needles in haystacks, relative to atomic resources of observed cosmos or solar system. The conclusion is that such are highly reliable signs of design as cause. So, we have an epistemic right to trust them on cases where we do not directly see the causal story. Think about signs of arson as cause for a fire. KF kairosfocus
Adapa: Great. We really missed your contribution. gpuccio
LH: Pardon a side note. The simplest way is the change-of-base rule: log_2 x = log_10 x / log_10 2. In short, take the ratio of log x in a standard base to log 2 in the same base; that gives the log of x to base 2. And -log_2 x is simply log_2 (1/x). (Onwards from that, the point is about a posteriori accuracy of detection: what was sent in the comm sys is accurately detected.) KF kairosfocus
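(The same side note in code form; the 150-bit example is arbitrary, chosen only to illustrate the -log2 convention used throughout the thread.)

    from math import log, log2

    def log2_via_change_of_base(x):
        # log_2(x) = log_10(x) / log_10(2); any common base works
        return log(x, 10) / log(2, 10)

    print(log2_via_change_of_base(8))   # 3.0

    p = 1 / 2 ** 150       # probability of hitting a 150-bit target by chance
    print(-log2(p))        # 150.0, i.e. -log2(p) = log2(1/p)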
gpuccio I have said that I will infer design for any string of more than 600 characters which has good meaning in English. Or for any software of more than 3000 bits which can receive lists of words and order them in Windows. Without knowing anything else of those sequences. I am not "starting from the conclusion that design happened". I start from an observable property of the string itself. How useful is dFSCI? With dFSCI you can conclude design in strings you already know are designed. Good one. :) Adapa
KF: Thank you for your comments. Sometimes it is difficult to answer in detail, but I always try to do it when the interlocutor is asking true questions and proposing real arguments, either right or wrong. The results are usually not encouraging, but it is worthwhile to discuss and defend what we believe to be true. gpuccio
Learned Hand:
I don’t think that your procedure will ever generate a positive unless you start from the conclusion that design happened, or have some independent means of determining that design happened. Essentially, it only confirms that vastly unlikely events are vastly unlikely, and by ignoring known natural alternatives concludes that life is so unlikely it must have been designed.
This is simply false. I have said that I will infer design for any string of more than 600 characters which has good meaning in English. Or for any software of more than 3000 bits which can receive lists of words and order them in Windows. Without knowing anything else of those sequences. I am not "starting from the conclusion that design happened". I start from an observable property of the string itself. What you say is simply not true.
So I’m not sure I can give you a false positive, although I’ll put some thought into it.
I will wait.
In the meanwhile, why don’t you do the same? I’d be much more impressed with ID if its advocates took their own ideas more seriously.
Because I am sure that false positives don't exist. Because I take my ideas very seriously. And because I have not much time, and I don't want to waste it. gpuccio
GP: The fixation on the believed powers of subtraction and culling through differential reproductive success is such that I find it very useful to start at the problem of Darwin's pond or the like, to understand the point of the challenge of getting TO islands of function in large config spaces from arbitrary initial points, on sol system or cosmic resources that only allow very sparse search for the needle in the haystack.

Next, it will be helpful for objectors to recognise that the usual term, natural selection, misses the supposed generator of actual novel info, some form or other of chance variation. NS REMOVES failed or relatively failed varieties, it does not generate them.

And the hoped for incrementally functional advance up a continent of possibilities not only lacks warrant on the fossils but runs into the issue that major body plan units require well matched, interacting, multiple, correctly arranged parts that yield specific function, which are also embryologically feasible and coded in the zygote or whatever. That is going to confine to narrow zones or islands in the space of configs, e.g. to account for flying wings, one way flow lungs or the like.

But, we are looking at the results of deeply entrenched, often indoctrinated in core assertions of a system of thought. Generally, per Lakatos, Kuhn et al, that is going to take crisis and collapse of the system -- similar to Marxism. (I assure you, the deeply locked in Marxists were not open to external critique; only when all collapsed did they find themselves bewildered and forced to question.) KF kairosfocus
Learned Hand:
But the question isn’t whether “we” could do it. Rather it’s whether there is any way in which it could be done short of design. You want the answer to be “no,” but you don’t know that the answer is “no,” because you can’t compute the probability of all the possible paths nature could have taken. But ignoring the unknowns doesn’t make them go away. It may be that we will never know enough to calculate the odds, in which case dFSCI will never work properly. Sometimes the things we most sincerely want to be true are not.
This is not a scientific argument. In the same way, I can say that you want very badly that it be possible, but you have nothing to show for that theory. The only scientific argument is: have you any evidence that your explanation can do that? And the answer is: No. You have no evidence that dFSCI can emerge in the way you describe. Just abstract wishful thinking. Have I any evidence that dFSCI can emerge by design? Yes, a lot. Tons. Indeed, all the dFSCI whose origin we know is designed. This is scientific and empirical reasoning. gpuccio
Learned Hand:
It’s quite bold to say “My procedure works” when you’ve never actually used it to successfully determine design that wasn’t apparent from some traditional analysis (such as recognizing English). It could possibly work under some circumstances, such as where random variation is truly the only alternative–but that takes life off the table. And it’s never actually worked in the real world.
It's not bold. It works. Recognizing English is no different from recognizing that an enzyme accelerates a reaction, or that a watch measures time. RV is the only alternative in all cases of language, software, machines. All these objects, if complex enough, are designed. They never arise in any other way. Natural algorithms cannot generate them. Random variation can generate simple configurations which can have simple functions without having been designed. Like "I am". It works. It works in the real world. It always works. You are not convinced? Show where it does not work. Show a false positive. gpuccio
Learned Hand:
Yes, I could do the same. And I don’t know how to calculate -log2 of anything, so I won’t be using dFSCI to do it. Your procedure doesn’t do anything.
No. Wrong. See my answer to keith in post #597 for that. gpuccio
GP, your challenge of text generation beyond a threshold by blind chance and mechanical necessity is apt (I favour flattened off Zener noise or nice crackly frying sky noise . . . not pseudorandom sequences). I only note that within Sol system resources, 72 characters is enough, and for the observed cosmos, 143. On reports, I have seen 20 - 24 character strings. A serious look at the challenge will teach objectors much about what we have been saying. KF kairosfocus
Learned Hand:
You did not do a test. You did not actually calculate dFSCI for anything, and dFSCI is neither necessary nor helpful in determining that these posts are designed. We know they’re designed because we compare them to our personal experience of English communications, not through any calculation of generic designedness. I think it’s an important point: dFSCI is irrelevant to determining design not just in this case, but in all cases. There is no case I’m aware of that dSFSCI has ever been shown to work in the absence of the usual ways of detecting design.
What do you mean? Of course I did a test. We are comparing nothing. We are just evaluating whether a passage means something in English. That is an objective property. Anyone who knows English well can answer. Let's say that, instead of language, we are evaluating software. Give me executable programs which, when opened on a Windows XP system, can take lists of words and sort them in alphabetical order. Let's say they are at least 3000 bits long. I will infer design for them. Show one which was generated by a random bit generator. A false positive. I am doing a test. Absolutely. gpuccio
LH: Indeed, once one recognises the existence of FSCO/I, s/he may infer design without explicit calculation. However, counting characters of English text -- which are functionally specific -- against the (over-generous) threshold of 600 is precisely an information-beyond-a-threshold metric in the context of functional specificity. Using ASCII, one character is equivalent to seven bits, if you wish a more familiar unit. KF kairosfocus
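(The character-to-bit conversion in a few lines, using the 7-bit ASCII figure and the 500/1,000-bit thresholds already quoted in this thread; the 72- and 143-character marks match the figures KF cites elsewhere.)

    from math import ceil

    BITS_PER_ASCII_CHAR = 7

    def string_bits(num_chars):
        return num_chars * BITS_PER_ASCII_CHAR

    print(string_bits(600))                   # 4200 bits -- far past both thresholds
    print(ceil(500 / BITS_PER_ASCII_CHAR))    # 72 characters reach the 500-bit bound
    print(ceil(1000 / BITS_PER_ASCII_CHAR))   # 143 characters reach the 1000-bit bound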
Learned Hand:
Unless you were unaware of (as per (h)) or ignored (as per (i)) an alternative that could produce that object. Which means that your results are completely determined by your state of mind, making them not only subjective but extremely susceptible to bias. Someone who has a deep, heartfelt desire for the dFSCI calculation to show that life was designed, for example, has an enormous incentive to not see alternatives and thus return a false positive.
This would be a comment to my statement: "IOWs, there is no object in the universe (of which we can know the origin independently) for which we would infer design by this procedure and be wrong." I don't understand. I am saying that empirically nobody can present an object which will be judged by my procedure to exhibit dFSCI and be a false positive. I am speaking of objects of which we can know the origin independently, as you can notice. So, you can simply show me an object which has the properties you describe: with a function for which I can compute a satisfactorily high dFSCI, which will evoke in me no suspicion of being algorithmic and will induce me into error. Do it. You can start by showing a 600-character-long sequence which appears to me as having good meaning in English, but was randomly or algorithmically generated. Do it. gpuccio
Learned Hand:
This is a grandiose claim. Why not test it so that you can prove it? So far your response to that suggestion–correct me if I’m wrong, please–has been to ask other people to test it for you by providing you with subjects. It’s not your obligation to test your own theories; I’m sure you have your own job and hobbies. But grandiose claims that the claimant doesn’t bother to test set off my alarm bells. It sounds very much like the equivalent of ID in the sphere I’m more familiar with, law and finance. If someone claimed to have a machine that would predict futures prices in advance, but when asked to prove it responded, “You do it!”, they’d be laughed out of the building.
It is a claim as grandiose as it is true. But why do you say that I do not test it? That I ask others to test it for me? That is simply not true. I have tested it here. I have inferred design by dFSCI for all the posts here that are longer than 600 characters. And I have asked everyone here to provide some equivalent sequence which was generated by a random character generator, or by simple mathematical algorithms, and which will cause me to err and be judged by me a positive, while it is not. I am doing the test, not others. I could generate the random strings myself, but why? I am sure of the result. So, I am asking many more or less hostile readers to show those false positives in which they seem to have faith. Do that. You, do it. I already know that those false positives do not exist.

What do you want me to do? Generate random strings and post them here? You have to explain why it is so easy for me to infer design for all those sequences which have a good English meaning, and why I am so sure that no one will provide a false positive, if you really think that the dFSCI procedure has no value, or is circular, or whatever. Please note that I will infer design for all sequences longer than 600 characters which have good meaning in English. Without knowing anything of their origin. If my dFSCI reasoning is wrong, I am taking a huge risk. Why am I not worried at all? gpuccio
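(A sketch of how the test gpuccio describes might be automated; the judging function is a placeholder for a competent English reader, and the 27-symbol alphabet is an assumption.)

    import random, string

    ALPHABET = string.ascii_lowercase + " "   # 27 symbols, ~4.75 bits each

    def random_string(length=600):
        """One draw from a space of 27^600 (~2850 bits) configurations."""
        return "".join(random.choice(ALPHABET) for _ in range(length))

    def looks_like_good_english(text):
        # Placeholder judge: in the thread, a competent English reader plays this role.
        return False

    hits = sum(looks_like_good_english(random_string()) for _ in range(1000))
    print(hits)   # the claim under test is that this stays at 0 -- no false positives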
Learned Hand:
You’ve obviously put a lot of thought into this, so I’m surprised to see this. You know that “random variation” isn’t the proposal on the table from mainstream science. Your procedure is designed to test for design against a strawman. If you don’t know or can’t calculate the effects of selection, simply ignoring them doesn’t make the problem go away. It may be difficult to calculate the effects of many planetary bodies on the orbit of a moon, but ignoring them doesn’t make them go away–it only makes the calculation inaccurate.
No. The computation of functional complexity eliminates RV. Selection can be considered, but only if demonstrated. I have discussed this explicitly elsewhere.

NS can work only if complex functional sequences can be deconstructed, in the general case, into simpler steps which are "bridges" at sequence level, each of which can be expanded by positive NS because it confers reproductive advantage. The starting point is A, an initial state unrelated at sequence level to B, the new functional state (what I am discussing is the emergence of a new functional protein, of a new superfamily, for which there is no previous homologue known). So, again: A -> A1 (small transition, in the range of RV) -> expansion of A1 (reproductive advantage, NS) -> A2 -> expansion of A2 -> ... -> B (new functional state).

Objections:
a) There is no logical reason why A1, A2, ... An should exist.
b) There is no empirical evidence that they exist.
c) If each of them was selected and expanded, why is there no trace of them in the existing proteome?

IOWs, NS can easily be rejected as an explanation for functional proteins, unless you can provide an explicit pathway and demonstrate that it exists. The problem here is not to calculate "the effects of many planetary bodies". The problem here is to provide any evidence that those bodies exist. gpuccio
Learned Hand:
This is the flaw I’m most focused on. I don’t know if it’s the most serious problem with your procedure, but it’s the most comprehensible to me. First, as noted above, this makes the calculation entirely subjective. “2 + 2? is objective; “2 + the number of coins in your pocket” is subjective and the result will change from person to person. And as you yourself note, any one person’s calculations can change over time as their knowledge grows. At the very least, even if we discard the question of subjectivity, that means that this calculation is inherently susceptible to false positives. If you calculate dFSCI today and decide that it indicates design, you could learn tomorrow of a natural algorithm that explains the sequence. Your initial positive was therefore a false positive, exactly the result CSI isn’t supposed to return. Am I missing something? Do you not sign on to the usual claim that CSI can’t return false positives?
I have already answered, but I will add some more comments, as I understand that this point is important for you. It is a false problem. The purpose of eliminating necessity in Dembski's EF and in my procedure is essentially to rule out ordered sequences. That is not really a problem for my subset of specification, because complex digital functional specification is usually limited to three different sets of sequences: language, software, biological sequences. None of these three kinds of sequence complexity can be explained by natural algorithms (unless you believe in the myth of RV + NS). That's why, if the sequence is not ordered, and if its function depends only on processes which are strictly connected to a conscious understanding, the exclusion of algorithms is irrelevant.

To be clearer: why am I so sure that no natural algorithm, or even simple mathematical algorithm, can write a sonnet in English? It's simple: because no natural or mathematical algorithm can know anything about meaning, or about the English language. And there is no reason to believe that some day we will find some natural system which generates English sonnets. Only random variation could do that, and if the complexity is high enough random variation is out of the game. The same is true with software. Can you imagine some simple environmental algorithm which can generate the code for a spreadsheet? Do you really think that something like that can ever be found? No. Those things require understanding, programming, conscious search and computation. They can only be designed. Therefore, the problem is essentially to compute the functional information necessary to implement the function (the target space) in order to eliminate random events.

The same is true for proteins. The sequence of nucleotides which will code for the correct sequence of AAs in a functional enzyme can only be found by knowing the laws of biochemistry (top down), or by some painful intelligent bottom-up research guided by Intelligent Selection (for the difference between NS and IS, please refer to my previous answers to DNA_Jock). No biochemical algorithm can find those sequences by necessity. RV + NS is a false answer which has tried to deny that simple truth. It is false, it does not work, and cannot work. But that is another discussion. The simple point is: no explicit pathway based on RV and NS is known for any complex functional protein. gpuccio
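(A rough illustration of the order-versus-function distinction appealed to above, using compression length as a crude stand-in for Kolmogorov complexity; the example strings are arbitrary.)

    import zlib

    # An ordered, repetitive sequence collapses to a short description, while
    # meaningful English prose of comparable length resists compression.
    ordered = "AB" * 100   # 200 characters of pure repetition
    english = ("Random mutations alone cannot search so vast a space of sequences "
               "in the time available, the argument runs, so some further principle "
               "must be invoked to explain the functional complexity of proteins.")

    print(len(ordered), len(zlib.compress(ordered.encode())))   # 200 -> a handful of bytes
    print(len(english), len(zlib.compress(english.encode())))   # ~200 -> compresses only modestly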
Learned Hand:
Here I’m criticizing again. If I understand correctly, you’re considering the target space to be the specific function of the subject. What about all the other possible results? If you were calculating dFSCI for a hand of cards you’d consider the target space to be larger than that one specific hand, wouldn’t you? So how do you determine, with a subject as complex as life, the scope of the target space? All the other pathways that could have led to the same, or any other equally functional, result?
I am not sure I understand your point here. There are two possible aspects, so I will answer both:

a) dFSCI is computed for an explicitly defined function, and an explicitly defined way of measuring and assessing it. IOWs, for each function definition we must be able, in principle, to assess whether the defined function is present or absent in each possible sequence of the search space. That's what I mean when I say that our definition of function generates a binary partition in the search space.

b) Obviously, for big search spaces we cannot really measure the function in each possible sequence. So the target space must be evaluated indirectly. For proteins, that can be done approximately by the Durston method, or simply, in some cases, by my shortcut based on highly conserved sequences, as I have proposed for ATP synthase and histone H3. The aim is not to have an exact number, but an order of magnitude which is definitely higher than our threshold. In general, we are looking for a lower threshold (a conservative lower bound) of functional complexity. gpuccio
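(The arithmetic of point (b) in one function; the target- and search-space numbers below are invented purely to show the -log2 ratio calculation, not an estimate for any real protein.)

    from math import log2

    def dfsi_bits(target_space, search_space):
        """dFSI = -log2(target space / search space), as described above."""
        return -log2(target_space / search_space)

    # Invented illustration: a 100-AA sequence (search space 20^100) with an
    # assumed 10^90 functional sequences in the target space.
    print(dfsi_bits(1e90, 20.0 ** 100))   # ~133 bits for this made-up example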
Learned hand:
How do you define “function”? Does it require action, or can a state (such as being beautiful, or being hot, or being conductive) be a function? I ask because I don’t understand, not Socratically.
I have dedicated a whole OP to that. Here is the link: https://uncommondesc.wpengine.com/intelligent-design/functional-information-defined/ For your convenience, I quote here the most relevant part, but maybe you should read the whole post: "That said, I will try to begin introducing two slightly different, but connected, concepts: a) A function (for an object) b) A functionality (in a material object) I define a function for an object as follows: a) If a conscious observer connects some observed object to some possible desired result which can be obtained using the object in a context, then we say that the conscious observer conceives of a function for that object. b) If an object can objectively be used by a conscious observer to obtain some specific desired result in a certain context, according to the conceived function, then we say that the object has objective functionality, referred to the specific conceived function. The purpose of this distinction should be clear, but I will state it explicitly just the same: a function is a conception of a conscious being, it does not exist in the material world outside of us, but it does exist in our subjective experience. Objective functionalities, instead, are properties of material objects. But we need a conscious observer to connect an objective functionality to a consciously defined function." gpuccio
Learned hand:
Your proposal puts me in mind of someone presenting a method for determining prime numbers, where that method can only exclude candidates if the tester already knows that they are not prime. Assuming no other method for determining primes, the results are going to differ from tester to tester based on their knowledge and beliefs. Like dFSCI, that’s not going to return consistent or accurate results.
I don't understand what that has to do with my procedure. Your statement seems to be circular, if I understand it well (I am not really sure what you mean), while dFSCI is not a circular procedure. In dFSCI we exclude those specific cases where a sequence (usually showing some form of order) can be explained by a known algorithm which can operate in the system we are studying. This is a scientific empirical evaluation, like all scientific procedures. Again, like many others, I suspect that you are confounding empirical scientific methods with mathematical demonstrations. I maintain that my procedure, correctly applied, has no false positives. You say that there must be a lot of them. Please, show one. It should be easy. gpuccio
Learned Hand:
That not only makes the process subjective, it virtually guarantees false positives. After all, if you and I both apply the tool but you know more about the relevant algorithms than I do, I may decide there is no “explicit algorithm available in the system can explain the sequence.” You may apply your greater knowledge and conclude that there is. I have therefore reached a subjective false positive, despite correctly calculating dFSCI according to your procedure.
No. Science is made by sharing knowledge, not by personal secrets. If I apply the dFSCI procedure, I must be aware of all that is available about the problem that I am analyzing, exactly as I must know quantum mechanics to solve a quantum mechanics problem, and I must be an up-to-date medical doctor to heal a patient. To apply the procedure correctly means to have full knowledge of what has to be known about our system and our problem, and of the current scientific data and theories about it. Again, please offer a false positive to my design detection for the Shakespeare sonnet. After all, someone could know a simple algorithm which can write a sonnet in good English without any conscious intervention and without any added information. That would reduce the complexity of any 600-character passage in good English to a much lower number of bits (those of the algorithm), and we should reconsider everything. But that is simply not true. So, I will maintain my inference. If you have a false positive, show it. gpuccio
Learned Hand: Now, let's go to your post #594.
That you would write this and then go on to describe a completely subjective measurement is surprising to me. How can dFSCI be objective if the calculation inherently, explicitly depends on your knowledge of design alternatives? Your technique only reports that something is designed if you don’t know of any non-design alternatives. Thus, two different people applying your technique correctly, without error, can easily arrive at two different conclusions.
No. What is required is just an analysis of what we are observing. If it shows regularity, an algorithmic explanation must be suspected and thoroughly excluded. This is normal scientific methodology. In the case of functions like language, software and protein sequences, it is easy to exclude any algorithmic origin present in the system.

Let's restrict the debate to protein coding gene sequences. All that we know of the biochemical laws proves that a complex functional protein sequence can never arise algorithmically, because no natural law is aware of what is necessary to have a protein which folds and works to get some specific result. An algorithmic explanation is always related to some order, because a necessity law is simple and generates order. For example, a protein which is made only of one AA can be explained by some biological system where only that amino acid is available. So, if I define a function for that protein, that function is not complex, because it can be explained algorithmically (IOWs, its Kolmogorov complexity is low). But if I want to compute the sequence of a functional enzyme of sufficient functional complexity, I need specific understanding of biochemical laws and computational power, and a lab in which to apply an intelligent procedure; and even so, that result is still beyond our powers as intelligent designers.

There is only one necessity algorithm proposed to explain functional proteins: NS applied to RV. As I have said many times, if it works then all the design detection procedure applied to proteins is false. That is good. It means that the biological argument based on dFSCI can be falsified, and is a correct scientific theory in the Popperian sense. The reasons why RV + NS is not an explicit explanation for any known protein of functional complexity have been debated many times. It is an integral part of the design detection theory in biology.

So, to sum up: there is nothing subjective. The problem is not whether I know an algorithm which can explain proteins. The simple point is that no such algorithm must be available. If I want to be really thorough, I can make an Internet search, consult specialists in the field, and so on. The simple point is: if I infer design for ATP synthase, it is because I am sure that nobody can offer an explicit algorithmic explanation for it. Blind faith in the undemonstrated powers of generic RV + NS is not a valid explanation. gpuccio
Learned Hand: "As already noted, dFSCI would not return a positive for four characters. He's right–dFSCI is a useless step in that "test." When is it useful? Has dFSCI (or CSI or F/SCIO) ever detected design that was otherwise not apparent?"

I will answer your #596 first, because it can probably help with the answers which will follow to your previous, more detailed analysis. You have it wrong. The whole purpose of dFSCI and of design detection is not to detect design that is not otherwise apparent. It cannot do that, because it is a procedure with low sensitivity. The purpose is exactly the opposite: to confirm that objects whose design is apparent are truly designed, and that the appearance of design is not a pseudo-design. IOWs, design detection serves to distinguish between true design (apparent) and pseudo-design (apparent too). Between a painting made by an artist and the clouds in the sky which resemble something. Is that clear?

Now, don't say that it is useless. It is very useful. Isn't it useful to know that some artifact is a true designed artifact, and not a stone shaped by the wind? Isn't it useful to know that ATP synthase was designed, and did not arise by non-conscious events? Obviously, sometimes the function can be difficult to recognize. For example, if I see a binary sequence, I might not understand immediately that it is working software. That means that the recognition of a function is a science of its own. But the recognition of a function, while necessary to infer design, is not the design inference. The design inference relies on the demonstration that the recognized function is complex and not algorithmic. Of course, we can only infer design for functions that we recognize and define explicitly. gpuccio
keith s: "The two procedures give exactly the same results, yet the second one doesn't even include the dFSCI step. All the work was done by the other steps. The dFSCI step was a waste of time, mere window dressing."

I am surprised that you still do not understand. See KF's comment at #588, which is perfectly correct. Your procedure is an evaluation of dFSCI, in the phrase: "1. Look at a comment longer than 600 characters. 2. If you recognize it as meaningful English, conclude that it must be designed." That implies a definition of the function, a measurement of the search space, and the statement of a threshold which, by its length alone, guarantees a sufficiently small target space / search space ratio. It's dFSCI calculation all the way.

The only point is that I do not have a simple way to measure the target space for English language, so I have taken a shortcut by choosing a long enough sequence, so that I am quite sure that -log2 of the target space / search space ratio is above 500 bits. As I have clearly explained in my post #400. For proteins, I have methods to approximate a lower threshold for the target space. For language I have never tried, because it is not my field, but I am sure it can be done. We need a linguist (Piotr, where are you?). That's why I have chosen an over-generous length. Am I wrong? Well, just offer a false positive. For language, it is easy to show that the functional complexity is bound to increase with the length of the sequence. That is IMO true also for proteins, but it is less intuitive. gpuccio
Your “test” is nothing more than this: 1. Look at a comment longer than 600 characters. 2. If you recognize it as meaningful English, conclude that it must be designed. 3. Perform a pointless and irrelevant dFSCI calculation. 4. Conclude that the comment was designed. Why not omit step 3, since it is useless?
Because it is not useless. You could easily enough attain, in a random character generator, the phrase: “I am”. Which has perfect sense in English.
As already noted, dFSCI would not return a positive for four characters. He's right--dFSCI is a useless step in that "test." When is it useful? Has dFSCI (or CSI or F/SCIO) ever detected design that was otherwise not apparent? Learned Hand
gpuccio @ 580
I have been deeply engrossed, in the last months, in what is known about epigenetic control of differentiation, and it is refreshing to study a biological argument which screams design and where RV + NS is rarely invoked explicitly, maybe because of some residual modesty.
Also, could it be that many serious researchers - who are busy trying to figure out how those elaborate molecular/cellular choreographies are orchestrated - don't have time to squander on senseless OOL issues? Perhaps some of them make quick mentions of a few 'required' "n-D e" keywords in order to appease some 'still influential' personalities within the academic establishment and to keep the censorship police away? After all, couldn't grants be denied if the proposed research could shake the foundations of some long-held 'proven' ideas printed year after year in profitable textbook publications by 'influential' personalities who sit on boards that approve those grants? I've heard from scientists I know that this is the case more often than one would think. C'est la vie, mon ami! Remember that in a language spoken by many people in this world 'horror show' means 'good'. Again, a very commendable and highly appreciated effort you're making. Eccellente!!! Keep it up!!! Dionisio
gpuccio,
Well, I have not been so brief, after all.
No, but you're gracious to write so much. Thank you for the effort and the clarity.
I will explain what is “simple, beautiful and consistent” about CSI. It is the concept that there is an objective complexity which can be linked to a specification, and that high values of that complexity are a mark of a design origin. This is true, simple and beautiful. It is the only objective example of something which can only derive from a conscious intentional cognitive process.
That you would write this and then go on to describe a completely subjective measurement is surprising to me. How can dFSCI be objective if the calculation inherently, explicitly depends on your knowledge of design alternatives? Your technique only reports that something is designed if you don't know of any non-design alternatives. Thus, two different people applying your technique correctly, without error, can easily arrive at two different conclusions. That not only makes the process subjective, it virtually guarantees false positives. After all, if you and I both apply the tool but you know more about the relevant algorithms than I do, I may decide there is no "explicit algorithm available in the system can explain the sequence." You may apply your greater knowledge and conclude that there is. I have therefore reached a subjective false positive, despite correctly calculating dFSCI according to your procedure. Your proposal puts me in mind of someone presenting a method for determining prime numbers, where that method can only exclude candidates if the tester already knows that they are not prime. Assuming no other method for determining primes, the results are going to differ from tester to tester based on their knowledge and beliefs. Like dFSCI, that's not going to return consistent or accurate results.
a) I define a specification as any explicit rule which generates a binary partition in a search space, so that we can identify a target space from the rest of objects in the search space. b) I define a special subset of SI: FSI. IOWs, of all possible types of specification I choose those where the partition is generated by the definition of a function. .... d) I can define any function I like for the object, including different functions for the same object. Maybe I can’t find any function for the object.
How do you define "function"? Does it require action, or can a state (such as being beautiful, or being hot, or being conductive) be a function? I ask because I don't understand, not Socratically.
f) I compute, or approximate, as much as possible, the target space, and therefore the target space/search space ratio, and take -log2 of that. This is the dFSI of the sequence for that function.
Here I'm criticizing again. If I understand correctly, you're considering the target space to be the specific function of the subject. What about all the other possible results? If you were calculating dFSCI for a hand of cards you'd consider the target space to be larger than that one specific hand, wouldn't you? So how do you determine, with a subject as complex as life, the scope of the target space? All the other pathways that could have led to the same, or any other equally functional, result?
h) I consider if the sequence has any detectable form of regularity, and if any known explicit algorithm available in the system can explain the sequence. The important point here is: there is no need to exclude that some algorithm can logically exist that will be one day found, and so on. All that has no relevance.
This is the flaw I'm most focused on. I don't know if it's the most serious problem with your procedure, but it's the most comprehensible to me. First, as noted above, this makes the calculation entirely subjective. "2 + 2" is objective; "2 + the number of coins in your pocket" is subjective and the result will change from person to person. And as you yourself note, any one person's calculations can change over time as their knowledge grows. At the very least, even if we discard the question of subjectivity, that means that this calculation is inherently susceptible to false positives. If you calculate dFSCI today and decide that it indicates design, you could learn tomorrow of a natural algorithm that explains the sequence. Your initial positive was therefore a false positive, exactly the result CSI isn't supposed to return. Am I missing something? Do you not sign on to the usual claim that CSI can't return false positives?
i) I consider the system, the time span, and therefore the probabilistic resources of the system (the total number of states that the system can reach by RV in the time span). So I define a threshold of complexity that makes the emergence by RV in the system and in the time span of a sequence of the target space an extremely unlikely event. For the whole universe, Dembski’s UPB of 500 bits is a fine threshold. For biological proteins on our planet, I have proposed 150 bits (after a gross calculation).
You've obviously put a lot of thought into this, so I'm surprised to see this. You know that "random variation" isn't the proposal on the table from mainstream science. Your procedure is designed to test for design against a strawman. If you don't know or can't calculate the effects of selection, simply ignoring them doesn't make the problem go away. It may be difficult to calculate the effects of many planetary bodies on the orbit of a moon, but ignoring them doesn't make them go away--it only makes the calculation inaccurate.
m) Why? This is the important point. This is not a logical deduction. The procedure is empirical. It can be applied as it has been described. The simple fact is that, if applied to any object whose origin is independently known (IOWs, we can know if it was designed or not, so we use it to test the procedure and see if the inference will be correct) it has 100% specificity and low sensitivity. IOWs, there are no false positives.
This is a grandiose claim. Why not test it so that you can prove it? So far your response to that suggestion--correct me if I'm wrong, please--has been to ask other people to test it for you by providing you with subjects. It's not your obligation to test your own theories; I'm sure you have your own job and hobbies. But grandiose claims that the claimant doesn't bother to test set off my alarm bells. It sounds very much like the equivalent of ID in the sphere I'm more familiar with, law and finance. If someone claimed to have a machine that would predict futures prices in advance, but when asked to prove it responded, "You do it!", they'd be laughed out of the building.
IOWs, there is no object in the universe (of which we can know the origin independently) for which we would infer design by this procedure and be wrong.
Unless you were unaware of (as per (h)) or ignored (as per (i)) an alternative that could produce that object. Which means that your results are completely determined by your state of mind, making them not only subjective but extremely susceptible to bias. Someone who has a deep, heartfelt desire for the dFSCI calculation to show that life was designed, for example, has an enormous incentive to not see alternatives and thus return a false positive.
Now, I will do a quick test. There are 560 posts in this thread. While I know independently that they are designed things, for a lot of reasons, I state here that any post here longer than 600 characters, and with good meaning in English, is designed.
You did not do a test. You did not actually calculate dFSCI for anything, and dFSCI is neither necessary nor helpful in determining that these posts are designed. We know they're designed because we compare them to our personal experience of English communications, not through any calculation of generic designedness. I think it's an important point: dFSCI is irrelevant to determining design not just in this case, but in all cases. There is no case I'm aware of in which dFSCI has ever been shown to work in the absence of the usual ways of detecting design.
And I challenge you to offer any list of characters longer than 600, as many as you like, where you can mix two types of sequences: some are true posts in good English, with a clear meaning, taken from any blog you like. Others will be random lists of characters, generated by a true random character generator software.
Yes, I could do the same. And I don't know how to calculate -log2 of anything, so I won't be using dFSCI to do it. Your procedure doesn't do anything.
This is a challenge. My procedure works. It works not because it is a logical theorem. Not because I have hidden some keithian circularity in it (why should a circular procedure work, at all?). It works because we can empirically verify that it works. . . . [B]iological objects. They are the only known objects in the universe which exhibit dFSCI, tons of it, and of which we don’t know the origin.
It's quite bold to say "My procedure works" when you've never actually used it to successfully determine design that wasn't apparent from some traditional analysis (such as recognizing English). It could possibly work under some circumstances, such as where random variation is truly the only alternative--but that takes life off the table. And it's never actually worked in the real world.
You cannot find the sequence of ATP synthase by a simple algorithm. Maybe we could do it by a very complex algorithmic search, which includes all our knowledge of biochemistry, present and future, and supreme computational resources.
But the question isn't whether "we" could do it. Rather it's whether there is any way in which it could be done short of design. You want the answer to be "no," but you don't know that the answer is "no," because you can't compute the probability of all the possible paths nature could have taken. But ignoring the unknowns doesn't make them go away. It may be that we will never know enough to calculate the odds, in which case dFSCI will never work properly. Sometimes the things we most sincerely want to be true are not.
Or, if you just want to falsify my empirical procedure, offer a false positive. I am here.
I don't think that your procedure will ever generate a positive unless you start from the conclusion that design happened, or have some independent means of determining that design happened. Essentially, it only confirms that vastly unlikely events are vastly unlikely, and by ignoring known natural alternatives concludes that life is so unlikely it must have been designed. So I'm not sure I can give you a false positive, although I'll put some thought into it. In the meanwhile, why don't you do the same? I'd be much more impressed with ID if its advocates took their own ideas more seriously. Learned Hand
gpuccio @ 580
The only reason why I have not yet completed my "announced" post about the procedures is that the subject is so complex and fascinating that I am still studying and reflecting.
As far as I can see, that's one of the most difficult articles anyone could ever attempt to write in any serious blog where participants make references to biology issues these days. Most probably a definitive game changer for many future discussions here and elsewhere. I won't be surprised if your first OP generates a long series of related OPs with no foreseeable ending. Actually, that could also be the draft for a very interesting publication, in the form of a book or your own separate blog. A 100% valid reason to take your time. Very commendable and highly appreciated. Thank you. :) Excellent!!! Many thanks!!! Dionisio
KF, That fails to address keiths's point that no form of CSI calculation is necessary to decide that recognizable English is recognizable English. It's simply putting a mathy dressing on what is not a mathematical calculation: recognizing something that we've seen before, and for which the specific mechanism (as opposed to "some kind of design") is not only known but familiar. I have yet to see even proposed a task for which CSI is actually necessary or useful in identifying design. It is not surprising, therefore, that IDsts have failed to ever use CSI productively in the real world despite enormous incentives to do so. Learned Hand
"The only reason why I have not yet completed my 'announced' post about the procedures is that the subject is so complex and fascinating that I am still studying and reflecting." -gpuccio @580
As far as I can see, that's one of the most difficult posts anyone could attempt to write in any science-related blog these days. 100% valid reason to take your time. Very commendable and highly appreciated. Thank you. :) Dionisio
"So, much of the above objectionism towards the design view is a case of straining at gnats while swallowing camels: selective hyperskepticism about what one is disinclined to accept multiplied by hypercredulity on any rays of faint hope for what is desired given the a priori evolutionary materialism and fellow travellers dominant in science and science education." -kf
Dionisio
F/N: From my lost draft, I add: take Cy-C and halve the info metric value from Yockey, call it 125 bits. Say, 100 proteins of similar average value per AA as the halved Cy-C; we have 12,500 functionally specific bits to get DNA for, plus support machinery such as ribosomes [which use a lot of RNA]. That is well past the 500 or 1,000 bit thresholds. Given the needle-in-a-haystack character of blind search, the only empirically warranted plausible source for such is design. And, by starting from a simple approach and then adjusting per factors, we can see how we get there, though there is a lot of underlying work by Yockey etc. there. Durston et al 2007 did fairly similar work, which is unfortunately not easy to follow for those likely to be reading a blog. Those who need it know where to find it. The point is, even if, after going through various factors, we set about one y/n choice per AA as the info content of a typical protein on average, once we set that in the context of hundreds of proteins, we are back to the same basic conclusion, provided we are willing to allow the force of the inductive patterns of reasoning that undergird science. And believe us, there have been objectors hereabouts who would burn down not only induction but logic. KF kairosfocus
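As a quick arithmetic check on the aggregation step above, here is a minimal Python sketch; the roughly 125-bit per-protein figure (the halved Cytochrome-c estimate) and the 100-protein count are assumptions taken from the comment, not measured values.

# Back-of-envelope aggregation sketch, using the comment's assumed figures.
BITS_PER_PROTEIN = 125   # assumed per-protein average (halved Cy-C estimate)
NUM_PROTEINS = 100       # assumed minimal protein count for the argument

total_bits = BITS_PER_PROTEIN * NUM_PROTEINS
print(total_bits)          # 12500
print(total_bits > 500)    # True: past the 500-bit threshold
print(total_bits > 1000)   # True: past the 1000-bit threshold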
KS:
Procedure 2:
1. Look at a comment longer than 600 characters.
2. If you recognize it as meaningful English, conclude that it must be designed.
3. Conclude that the comment was designed.
This failed to recognise that a count of characters of meaningful English is a valid metric of functionally specific information. Taking ASCII text at 7 bits per character, the length of 600 is quite over-generous [143 characters is about 1,000 bits], but it constitutes a complexity threshold beyond which it is unreasonable to infer that a sparse blind search on the gamut of the observable cosmos could plausibly generate the string. But as FSCO/I is routinely and empirically reliably known to be produced by intelligently directed configuration, it is reasonable to infer inductively, as best current explanation, that the sample in view was designed. Even that is corrective of the way you presented the design inference, despite many corrections across the years. KF kairosfocus
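For readers who want to check the character-count arithmetic, here is a minimal Python sketch of the metric as described in the comment above; the 7-bits-per-character figure is the comment's own 7-bit ASCII assumption.

# Nominal information capacity of an ASCII string at 7 bits per character.
BITS_PER_CHAR = 7

def ascii_bits(num_chars: int) -> int:
    return num_chars * BITS_PER_CHAR

print(ascii_bits(143))   # 1001 bits: just over the 1000-bit threshold
print(ascii_bits(600))   # 4200 bits: well beyond it, hence "over-generous"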
DJ: I think I need to comment on this "zero concessions to IDiots" remark at 570:
I only visited UD in order to point out to Kairosfocus his shoddy math.
I suggest to you, with all due respect, that you are being inappropriately hyperskeptical, especially starting from the OOL on up, which the Smithsonian accepts as the root of the TOL. In Darwin's pond or the like, there is a chaining chemistry of D/RNA and AAs that allows any of the 4 bases or 20 amino acids to follow any other, even after looking at the issues of chirality, cross-reactions, energetically uphill chemistry etc. and setting them to one side for the sake of argument. I must insist we start from OOL, as this is what allows us to understand the problem most clearly. We need to get to a gated, encapsulated, metabolising cell with an integral von Neumann self-replicator. That is what is empirically relevant and observed. When you can show empirically actual other architectures of biological life and how they bridge to what we see, then that would become relevant. In that context, we note that there are hundreds of proteins needed at minimum (including enzymes), and that the causal chain runs:
D/RNA --> Ribosome + tRNAs etc --> AA chain, adjustments and folding, agglomeration --> biofunction
There is no pattern where functional configs can cause changes to codes ahead of time so they can come into existence. Nor, given the sparseness of fold-function protein string clusters in AA space, is there a credible warrant for a simple stepping-stones incremental pattern across hundreds of relevant proteins. Nor does functionally specific, complex organisation leading to interactive coupling and function come about for free. Of course, one may go back-ways, look from functioning proteins, impose a priori evolutionary materialism constraints, and think that assessing info content on the basic facts of freedom of chaining is dismissible. But I would think that it is not unreasonable to look at that basic point first. Where, for instance, Shannon himself used the counting of possibilities in a chain as an information metric in that famous 1948 paper. That is, it is not inherently unreasonable or shoddy to look at a state in a set of possibilities, ask how many structured y/n questions in a context can be used to specify the state from other possibilities, and reckon this an information metric. One may go on to assess relative symbol frequencies and make adjustments using H = -SUM pi log2 pi, etc. And by using a suitable dummy variable one may warrant that the information metric is functional and specific based on observations. Functionality of configured complex entities is commonly observed in all sorts of settings. Here is Bradley, of the original team with Thaxton, who about 20 years ago presented the following to the ASA (I clip from App A of my always-linked, where it has been for years):
Cytochrome c (protein) -- chain of 110 amino acids of 20 types.
If each amino acid has pi = .05 [--> per front end chaining chemistry, not after the fact chains that are seen as functioning and variations on the same fundamental protein], then average information “i” per amino acid is given by log2 (20) = 4.32.
The total Shannon information is given by I = N * i = 110 * 4.32 = 475, and the total number of unique sequences “W0” that are possible is W0 = 2^I = 2^475 = 10^143.
Amino acids in cytochrome c are not equiprobable (pi =/= 0.05) as assumed above. If one takes the actual probabilities of occurrence of the amino acids in cytochrome c [--> thus an after the fact functional context in which the real world dynamic-stochastic process has been run through whatever real degree of common descent has happened], one may calculate the average information per residue (or link in our 110 link polymer chain) to be 4.139 using i = - SUM pi log2 pi [--> the familiar result].
Total Shannon information is given by I = N * i = 4.139 x 110 = 455. The total number of unique sequences “W0” that are possible for the set of amino acids in cytochrome c is given by W0 = 2^455 = 1.85 x 10^137 . . . .
Some amino acid residues (sites along the chain) allow several different amino acids to be used interchangeably in cytochrome c without loss of function, reducing i from 4.19 to 2.82 and I (i x 110) from 475 to 310 (Yockey) [--> again, after the fact of variations across the world of life to today, the real world Monte Carlo runs I spoke of].
M = 2^310 = 2.1 x 10^93 = W1
W0 / W1 = 1.85 x 10^137 / 2.1 x 10^93 = 8.8 x 10^44
Recalculating for a 39 amino acid racemic prebiotic soup [as Glycine is achiral] he then deduces (apparently following Yockey):
W1 is calculated to be 4.26 x 10^62
W0 / W1 = 1.85 x 10^137 / 4.26 x 10^62 = 4.35 x 10^74
ICSI = log2 (4.35 x 10^74) = 248 bits
He then compares results from two experimental studies: Two recent experimental studies on other proteins have found the same incredibly low probabilities for accidental formation of a functional protein that Yockey found: 1 in 10^75 (Strait and Dewey, 1996) and 1 in 10^65 (Bowie, Reidhaar-Olson, Lim and Sauer, 1990).
Now, we are actually dealing with hundreds of proteins from various families to make a living cell. In aggregate, the adjustments just seen in a simple case do not make any material difference: the overall FSCO/I in a living cell, or just in the whole protein synthesis system, is well beyond any reasonable reach of a blind watchmaker search process on the gamut of our observed cosmos. The only empirically warranted cause of FSCO/I is design. And FSCO/I is not after-the-fact target painting [just derange the organisation a bit and see function vanish]; it is as common as text in languages, computer programs, gears in a train, or wiring-diagram based function at all sorts of scales, from cell metabolism to petroleum refineries. It is readily observable and recognisable, and it is readily seen that it is reliably caused by design when we can directly see the cause. So, per vera causa, we are well warranted to infer it as a reliable sign of design. So, much of the above objectionism towards the design view is a case of straining at gnats while swallowing camels: selective hyperskepticism about what one is disinclined to accept, multiplied by hypercredulity on any rays of faint hope for what is desired, given the a priori evolutionary materialism and fellow travellers dominant in science and science education.
Now, let me turn to that shoddy math by an IDiot, so-called. First, information can be quantified by reasonable metrics, as a commonplace of information theory and practice, including counting chains of y/n questions in a structured context, which is of course a bit value. (If you doubt me, then kindly explain how AutoCAD works in terms of describing a configuration in a bit-based file structure.) Multiply by a dummy variable that certifies warrant on functional specificity dependent on config relative to available alternatives. For practical purposes we can look at the possibilities and count them element by element, or we may, if we wish, modify based on how symbols appear in proportion after the fact. It makes little practical difference to the overall result. Then, use a threshold that makes sparse search constrained by atomic resources (here, the solar system) maximally implausible:
Chi_500 = I*S - 500, functionally specific bits beyond a solar-system threshold of blind search,
where I is an info metric, S the warranting dummy variable, and 500 bits the threshold. If Chi_500 goes positive, it is implausible that on the gamut of the solar system something with FSCO/I came about by blind mechanisms. Instead, the best explanation is intelligently directed configuration, aka design.
Remember, we are aggregating hundreds of proteins. Let's take the Cy-C after-the-fact value and round down to 100 bits. Multiply by, say, 100: 10,000 bits. Well past 500 bits, or even the 1,000 for the observable cosmos. Also, let's look at codes, which appear in strings -*-*-*- . . ., which are a linear node-arc pattern. These are therefore a subset of FSCO/I. The DNA codes for the proteins run at 3 x 4-state letters per AA codon, six bits nominal; if we want to adjust by the halved Cy-C result, we are at about one bit per AA, roughly a third of a bit per base. In aggregate, we are again looking at something well past the limit. And while it is now a favourite pastime to try to pick holes in Dembski's 2005 metric model, I suggest again that if one simply reduces the logs, it is an info-beyond-a-threshold metric.
And, the probability that has become the focal point for objections of all sorts is an information value based on whatever happened in the real world with reasonable enough likelihood to be material. I suggest that info values from direct inspection of chains and possible states, or from working through a version of SUM pi log2 pi on observing the range of functional states in the world of life, will make but little difference to the result in the end. Especially given the islands-of-function pattern imposed by multiple-part, interactive, relevant function. If you cannot tell the difference between a pile of bricks and a functional castle -- as has happened here at UD in recent weeks -- then there is a problem here. I think, on fair comment, some reconsideration is required, sir. I must go, having had to rebuild a comment after a keystroke wipeout. KF kairosfocus
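As a side note for readers who want to reproduce the figures being argued over, here is a minimal Python sketch of the equiprobable cytochrome c numbers quoted from Bradley/Yockey and of the Chi_500 expression above; the inputs are the comment's own assumed values, not independent measurements.

import math

# Equiprobable cytochrome c figures quoted above: 110 residues, 20 AA types.
N = 110
i_per_residue = math.log2(20)     # ~4.32 bits per residue
I_total = N * i_per_residue       # ~475 bits
W0 = 2 ** I_total                 # ~1e143 possible sequences
print(round(i_per_residue, 2), round(I_total), f"{W0:.1e}")

# Chi_500 = I*S - 500: an information metric I, times a functional-specificity
# dummy variable S (1 if independently judged functionally specific, else 0),
# minus a 500-bit sparse-search threshold. A positive value is read in the
# comment as beyond the plausible reach of blind search at solar-system scale.
def chi_500(I: float, S: int) -> float:
    return I * S - 500

print(chi_500(310, 1))      # Yockey's adjusted cytochrome c figure: -190
print(chi_500(10_000, 1))   # ~100 proteins at ~100 bits each: 9500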
gpuccio, We can use your very own test procedure to show that dFSCI is useless.
Procedure 1:
1. Look at a comment longer than 600 characters.
2. If you recognize it as meaningful English, conclude that it must be designed.
3. Perform a pointless and irrelevant dFSCI calculation.
4. Conclude that the comment was designed.
Procedure 2:
1. Look at a comment longer than 600 characters.
2. If you recognize it as meaningful English, conclude that it must be designed.
3. Conclude that the comment was designed.
The two procedures give exactly the same results, yet the second one doesn't even include the dFSCI step. All the work was done by the other steps. The dFSCI step was a waste of time, mere window dressing. Even your own test procedure shows that dFSCI is useless, gpuccio. keith s
keiths, to gpuccio:
Your “test” is nothing more than this:
1. Look at a comment longer than 600 characters.
2. If you recognize it as meaningful English, conclude that it must be designed.
3. Perform a pointless and irrelevant dFSCI calculation.
4. Conclude that the comment was designed.
Why not omit step 3, since it is useless? dFSCI adds absolutely nothing that wasn’t there already. You had already determined that the comment was designed, so your entire argument is circular:
1. If a 600-character comment looks designed, attribute dFSCI to it.
2. If it has dFSCI, conclude that it was designed.
It’s amazing to me that you won’t let yourself see this, gpuccio.
gpuccio:
Because it is not useless. You could easily enough obtain, from a random character generator, the phrase “I am”, which makes perfect sense in English.
Read my step #1 again:
1. Look at a comment longer than 600 characters.
"I am" is not longer than 600 characters. keith s
keith s: "There is a cost, however. If you can’t defend your own idea, no one else has a reason to take it seriously." I will happily take that risk. gpuccio
keith s: The boolean part is very simple: If the numerical result is higher than an appropriate threshold for the system and time span, and I am not aware of any regularity or of any explicit algorithm in the system which can explain the sequence, I infer design. It is simple. I can do it. The only way it can be a fiasco is if it gives false positives. That would simply mean that it is a wrong procedure, not that it is circular. Empirical procedures, very simply, either work or do not work. So, again, falsify it. Offer a false positive. Is it possible that I have a simple practical procedure to detect design which, in your opinion, is circular, fiasco, whatever you like, and you can't falsify it empirically? Strange indeed. gpuccio
gpuccio,
There is a point where reasonable people must accept that they have different ideas. You don’t seem to believe that, and go on crying: “I am right. I win.”
I've made an argument showing that dFSCI has no scientific value. If you can't refute my argument, that's fine. We can leave it there. There is a cost, however. If you can't defend your own idea, no one else has a reason to take it seriously. keith s
gpuccio,
Wasn’t the problem that dFSCI was circular? Please decide.
The numerical part isn't circular, but it's useless. It's the boolean part that has significance, but your argument is circular. I've explained this already:
gpuccio, We’ve been over this many times, but the problem with your dFSCI calculations is that the number they produce is useless. The dFSCI number reflects the probability that a given sequence was produced purely randomly, without selection. No evolutionary biologist thinks the flagellum (or any other complex structure) arose through a purely random process; everyone thinks selection was involved. By neglecting selection, your dFSCI number is answering a question that no one is asking. It’s useless. There is a second aspect of dFSCI that is a boolean (true/false) variable, but it depends on knowing beforehand whether or not the structure in question could have evolved. You can’t use dFSCI to show that something couldn’t have evolved, because you already need to know that it couldn’t have evolved before you attribute dFSCI to it. It’s hopelessly circular. What a mess. The numerical part of dFSCI is useless because it neglects selection, and the boolean part is also useless because the argument that employs it is circular. dFSCI is a fiasco.
keith s
DNA_Jock: Thank you for your thoughts at #570. By the way, I apologize if I have been slightly "harsh", but that's what my bad character still considers "strong intellectual confrontation". I appreciate your intelligent answer even more for that. :) What can I say? You are a very reasonable person with, IMHO, a few very wrong ideas. No problem in that, and I would be happy if you could see me in a similar way (maybe without the "very", just not to feed my narcissism! :) ). And I don't want to sidetrack more than you would like. So, you feel free to stop the discussion wherever you want. You say: "But I see zero evidence to suggest that the mechanism for affinity maturation has a conscious source." OK, the evidence would obviously be of the ID type (complex functional information), but I never try to compute it for so complex systems. Let's say that I see zero evidence to suggest that the mechanism for affinity maturation evolved by RV + NS. Let's say that it is a pretty mysterious thing. "ABSOLUTELY not. If he were a hyper-intelligent shade of blue, I might have some thinking to do, but if he is the result of replication with variation, I have no trouble ascribing consciousness. It’s quite adaptive." You will maybe admit that there is some pre-commitment to a specific worldview here. No problem in that. I respect pre-commitments. I have mine too. But they are different. And I would definitely love a "hyper-intelligent shade of blue". OK, green would do, too! "I agree. Perhaps because I have a dog." I have three cats. How could we ever expect to understand each other! :) "I’m not the one making false inductions here, and neither are you. I was just pointing out the ridiculousness of the “universal experience” induction to infer a disembodied intelligence. The irony is that I only visited UD in order to point out to Kairosfocus his shoddy math. Sigh. I got involved in two side-tracks, one very interesting, the other, errr, not so much." This is a blog. You are free to choose your sidetracks (maybe you don't know it, but the only other point which I have debated here for a long time beyond ID proper is libertarian free will, in which I passionately believe). :) I must say that I have great respect for Dionisio and for his contributions here. His points about the complexity of regulatory procedures are absolutely valid, and he has offered a lot of very interesting references to recent literature about that in the "third way" thread. I have been deeply engrossed, in the last months, in what is known about epigenetic control of differentiation, and it is refreshing to study a biological argument which screams design and where RV + NS is rarely invoked explicitly, maybe because of some residual modesty. The only reason why I have not yet completed my "announced" post about the procedures is that the subject is so complex and fascinating that I am still studying and reflecting. gpuccio
keith s: "Your “test” is nothing more than this: 1. Look at a comment longer than 600 characters. 2. If you recognize it as meaningful English, conclude that it must be designed. 3. Perform a pointless and irrelevant dFSCI calculation. 4. Conclude that the comment was designed. Why not omit step 3, since it is useless? " Because it is not useless. You could easily enough attain, in a random character generator, the phrase: "I am". Which has perfect sense in English. So, see my post #400: "A Shakespeare sonnet. Alan’s comments about that are out of order. I don’t infer design because I know of Shakespeare, or because I am fascinated by the poetry (although I am). I infer design simply because this is a piece of language with perfect meaning in english (OK, ancient english). Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600 characters sequences which make good sense in english is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski’s UPB. As I am aware of no simple algorithm which can generate english sonnets from single characters, I infer design. I am certain that this is not a false positive." As you can see, there are reasons for my empirical threshold in this specific case, and those reasons derive from my concept of dFSCI. Which is neither pointless nor irrelevant. So much so, that you have not falsified it by offering a false positive to my simple procedure. As you can see, as soon as you try something different from repetition of statement of your victory, I answer. I respect arguments, even if wrong. Otherwise, see my post #536. gpuccio
keith s:
This is why the numerical value of dFSCI is irrelevant. Evolution isn’t searching for that specific target, and even if it were, it doesn’t work by random mutation without selection. By omitting selection, you’ve made the dFSCI value useless.
Wasn't the problem that dFSCI was circular? Please decide. For the "any possible function" objection, please see my post #400 here. I have discussed in detail the reasons why NS is not an explanation of proteins' dFSCI on many different threads. If you are backpedaling to this kind of argument, we can discuss it again. But I warn you: it is a biological argument, so you have to forget your silly sidetracks of "the designer can do it in so many other ways", or "dFSCI is circular", which have nothing to do with that. A good start would be: concede for the sake of discussion that there is no need to assume anything about the designer except that he is a conscious, intelligent, purposeful being, and that dFSCI is not circular and empirically works, and that the only way of falsifying the design inference for proteins is to demonstrate that the algorithm RV + NS can explain the functional complexity in them. Then explain briefly why you think that RV + NS can do the trick, and I will give you the reasons why they can't. OK? Otherwise, I quote myself (post #536): "There is a point where reasonable people must accept that they have different ideas. You don’t seem to believe that, and go on crying: “I am right. I win.” That’s fine. Go on." gpuccio
gpuccio, after describing his English-language comment "test" of dFSCI:
This is a challenge. My procedure works. It works not because it is a logical theorem. Not because I have hidden some keithian circularity in it (why should a circular procedure work, at all?). It works because we can empirically verify that it works.
You can't be serious. Can you? Your "test" is nothing more than this:
1. Look at a comment longer than 600 characters.
2. If you recognize it as meaningful English, conclude that it must be designed.
3. Perform a pointless and irrelevant dFSCI calculation.
4. Conclude that the comment was designed.
Why not omit step 3, since it is useless? dFSCI adds absolutely nothing that wasn't there already. You had already determined that the comment was designed, so your entire argument is circular:
1. If a 600-character comment looks designed, attribute dFSCI to it.
2. If it has dFSCI, conclude that it was designed.
It's amazing to me that you won't let yourself see this, gpuccio. keith s
gpuccio, to Learned Hand:
I will explain what is “simple, beautiful and consistent” about CSI. It is the concept that there is an objective complexity which can be linked to a specification, and that high values of that complexity are a mark of a design origin.
gpuccio, That is true for Dembski's CSI, but not your dFSCI. And as I pointed out above, Dembski's CSI requires knowing the value of P(T|H), which he cannot calculate. And even if he could calculate it, his argument would be circular. Your "solution" makes the numerical value calculable, at the expense of rendering it irrelevant. That's a pretty steep price to pay.
There are indeed different approaches to a formal definition of CSI and of how to compute it,
Different and incommensurable.
a) I define a specification as any explicit rule which generates a binary partition in a search space, so that we can identify a target space from the rest of objects in the search space.
Which is already a problem, because evolution does not seek out predefined targets. It takes what it stumbles upon, regardless of the "specification", as long as fitness isn't compromised.
b) I define a special subset of SI: FSI. IOWs, of all possible types of specification I choose those where the partition is generated by the definition of a function. c) I define a subset of FSI: those objects exhibiting digital information. d) I define dFSI the -log2 of the ratio of the target space / the search space.
This is why the numerical value of dFSCI is irrelevant. Evolution isn't searching for that specific target, and even if it were, it doesn't work by random mutation without selection. By omitting selection, you've made the dFSCI value useless.
e) I categorize the value of dFSI according to an appropriate threshold (for the system and object I am evaluating, see later). If the dFSI is higher than the threshold, I say that the object exhibits dFSCI (see later for the evaluation of necessity algorithms) To infer design for an object, the procedure is as follows: a) I observe an object, which has its origin in a system and in a certain time span. b) I observe that the configuration of the object can be read as a digital sequence. c) If I can imagine that the object with its sequence can be used to implement a function, I define that function explicitly, and give a method to objectively evaluate its presence or absence in any sequence of the same type. d) I can define any function I like for the object, including different functions for the same object. Maybe I can’t find any function for the object. e) Once I have defined a function which is implemented by the object, I define the search space (usually all the possible sequences of the same length). f) I compute, or approximate, as much as possible, the target space, and therefore the target space/search space ratio, and take -log2 of that. This is the dFSI of the sequence for that function. h) I consider if the sequence has any detectable form of regularity, and if any known explicit algorithm available in the system can explain the sequence. The important point here is: there is no need to exclude that some algorithm can logically exist that will be one day found, and so on. All that has no relevance. My procedure is an empiric procedure. If an algorithmic explanation is available, that’s fine. If no one is available, I go on with my procedure.
Which immediately makes the judgment subjective and dependent on your state of knowledge at the time. So much for objectivity.
i) I consider the system, the time span, and therefore the probabilistic resources of the system (the total number of states that the system can reach by RV in the time span). So I define a threshold of complexity that makes the emergence by RV in the system and in the time span of a sequence of the target space an extremely unlikely event. For the whole universe, Dembski’s UPB of 500 bits is a fine threshold. For biological proteins on our planet, I have proposed 150 bits (after a gross calculation).
Again, this is useless because nobody thinks that complicated structures or sequences come into being by pure random variation. It's a numerical straw man.
l) If the functional complexity of the sequence I observe is higher than the threshold (IOWs, if the sequence exhibits dFSCI), and if I am aware of no explicit algorithm available in the system which can explain the sequence, then I infer a design origin for the object. IOWs, I infer that the specific configuration which implements that function originated form a conscious representation and a conscious intentional output of information form a designer to the object.
In other words, you assume design if gpuccio is not aware of an explicit algorithm capable of producing the sequence. This is the worst kind of Designer of the Gaps reasoning. It boils down to this: "If gpuccio isn't aware of a non-design explanation, it must be designed!" keith s
570 DNA_Jock Is that the best you can do to comment on gpuccio's insightful post #569? Dionisio
#568 DNA_Jock
I was checking to see if there were any unanswered questions that you were willing to claim I ought to answer.
That's fine, the onlookers and lurkers can see which questions you didn't answer. Dionisio
#568 DNA_Jock
I think I’m done.
Well, what else can you do when you lack sound arguments to continue the discussion? :) Dionisio
567 DNA_Jock Is that the best you can do to answer questions? Dionisio
DNA_Jock:
(Off-topic: I believe I owe you an apology: from various things that Mung had said about you at TSZ, I had erroneously assumed that you had discussed PDZ on UD. My bad.)
Everything you read at TSZ is true. No need for an apology. Mung
gpuccio, Thought-provoking stuff.
Are you so sure that such a programming has no conscious source?
No, I am not sure. But I see zero evidence to suggest that the mechanism for affinity maturation has a conscious source. You seem to assume that I hold anthropocentric views that I do not, in fact, hold, viz:
if you met an alien who talks, designs, manifests cognition and feelings so that you can understand them, and communicates with you, would you doubt his conscious experiences only because he is not human?
ABSOLUTELY not. If he were a hyper-intelligent shade of blue, I might have some thinking to do, but if he is the result of replication with variation, I have no trouble ascribing consciousness. It's quite adaptive.
That understanding, feeling and love are not mere accidents of one species.
I agree. Perhaps because I have a dog.
So, attempts at trivializing that important concept with false inductions which have nothing to do with its substance have an unpleasant connotation of intellectual arrogance and reductionist dogma in them. With all respect.
I'm not the one making false inductions here, and neither are you. I was just pointing out the ridiculousness of the "universal experience" induction to infer a disembodied intelligence. The irony is that I only visited UD in order to point out to Kairosfocus his shoddy math. Sigh. I got involved in two side-tracks, one very interesting, the other, errr, not so much. DNA_Jock
DNA_Jock: Please, excuse my intrusion, but the inference to design by analogy in the presence of specific aspects in observed outcomes is not so trivial as you seem to make it. There is a reason why human consciousness can design things. It is an empirically observed fact, but there are other empirical observations which can explain how it happens. For example, I would argue that our ability to represent meaning, and our special capacity to have purposes, are the real source of our ability to design complex things. Meaning is the foundation of all human understanding, and meaning is a conscious experience. You cannot define it objectively, out of consciousness. The same is true for desire and purpose: they are feelings. You cannot define them in matter. Material systems do not understand meanings, and have no purposes. That's why they cannot generate new original dFSCI. You point to material algorithms which process information, like in the immune system, but these are not different from a computer software. Algorithms can compute, and they can generate meaningful outputs, but they don't understand what they do. The immune system is programmed to act that way, and it acts that way. Are you so sure that such a programming has no conscious source? The problem is, your equating conscious cognition with the human condition is rather keithian, and I must say that it is a small disappointment for me. Consciousness, meaning, cognition and purpose, and their product, design, are all realities of which we have ample experience in ourselves. Humans experience all those things. But equating all those things with being part of a specific species is rather narrow-minded. Again, I ask, as I have asked keith (if I remember well): if you met an alien who talks, designs, manifests cognition and feelings so that you can understand them, and communicates with you, would you doubt his conscious experiences only because he is not human? Humans have always understood that consciousness is more than human consciousness. That understanding, feeling and love are not mere accidents of one species. When we infer design, we infer a designer who is conscious, has understanding of meaning, and has purposes. We need not imagine him as human or anything else, because we know that those experiences, those qualities, are all that is necessary to design complex things. So, attempts at trivializing that important concept with false inductions which have nothing to do with its substance have an unpleasant connotation of intellectual arrogance and reductionist dogma in them. With all respect. gpuccio
Oh, honey, I know I don't have to answer any of your questions. I was checking to see if there were any unanswered questions that you were willing to claim I ought to answer. I think I'm done. DNA_Jock
Sorry. I was too busy laughing to pick up on the import of your final sentence in #561 there. I have included it as statement #4. It doesn’t change your fallacy, I’m afraid. Your argument now becomes:
1. Humans are the only known creators of CFPOPI-processing systems (this is still true, however much you try to parse it).
2. We observe CFPOPI-processing systems in the camel immune system.
3. Therefore humans are the creators of the camel immune system.
4. But we know humans are NOT the creators of the camel immune system.
5. Therefore we have made a mistake in our logic.
Previously, your argument was: Intelligence is the only known source of I-PS; therefore, whenever there is I-PS, its origin must be intelligence. You backed away from this argument when I pointed out the corollary: Biology is the only known source of intelligence; therefore, whenever there is intelligence, its origin must be biology. The argument that you want to make is as follows:
1. Humans, by virtue of their intelligence, are the only known creators of CFPOPI-processing systems.
2. We observe CFPOPI-processing systems in the camel immune system.
3. Therefore an intelligence is the creator of the camel immune system.
4. But we know humans are NOT the creators of the camel immune system.
5. Therefore a non-human intelligence is the creator of the camel immune system.
But the same “universal observation” inductive reasoning leads us to conclude that all intelligences are made of food. DNA_Jock
#546 DNA_Jock
I answered many, but not all, of your questions. Some appeared to be entirely rhetorical. If there are particular questions that you think I ought to answer, but have not, please restate them.
Remember that you don't have to answer any of my questions. Still the onlookers and lurkers will see what I write and arrive at their own conclusions. :) Dionisio
#564 DNA_Jock Was that the best you could do to respond to my question? You rewrote my original text in post #561 and completely changed its meaning. Is that a fair practice in your book? Did you do that by mistake, because you misread and misunderstood the text? Did you do that purposely, intentionally? Why? Do you do that frequently? Is that your way of engaging in serious discussions? :( Dionisio
Dionisio, Truly awesome!
Apparently you did not understand what I meant by “there’s only one known source of information-processing systems: intelligence”. This is about what is known to produce systems that process complex functional purpose-oriented prescriptive information. So far, we know that humans can create such systems.
Well, geez, why didn't you say so? [grin] Depending on your definition of complex (there are nearly as many definitions as there are IDists), bacteriophage lambda processes "complex functional purpose-oriented information". But to humor you, I will imagine that "prescriptive" has some particular meaning such that your statement "So far, we know [only] humans can create such systems" holds. So your argument now has the form:
1. Humans are the only known creators of CFPOPI-processing systems.
2. We observe CFPOPI-processing systems in the camel immune system.
3. Therefore humans are the creators of the camel immune system.
When in #551 I said "You might want to think that one through", I meant that you might want to think that one through. I then said: The problem is with the inductive nature of your “only one known source” argument. It's kinda cute that you omitted the "only" when you went from I-PS to CFPOPI-PS and switched from "intelligence" as the source to "humans". In other news, you might want to look at Fig 1 of Vernimmen. DNA_Jock
Learned Hand: I will explain what is “simple, beautiful and consistent” about CSI. It is the concept that there is an objective complexity which can be linked to a specification, and that high values of that complexity are a mark of a design origin. This is true, simple and beautiful. It is the only objective example of something which can only derive from a conscious intentional cognitive process. There are indeed different approaches to a formal definition of CSI and of how to compute it, and of how to interpret the simple fact that it is a mark of design. I have tried to detail my personal approach, mainly by answering the many objections of my kind interlocutors. And yes, there are slight differences between my approach and, for example, Dembski's, especially after the EF. My approach is essentially a completely pragmatic formulation of the EF. In brief:
a) I define a specification as any explicit rule which generates a binary partition in a search space, so that we can identify a target space from the rest of the objects in the search space.
b) I define a special subset of SI: FSI. IOWs, of all possible types of specification I choose those where the partition is generated by the definition of a function.
c) I define a subset of FSI: those objects exhibiting digital information.
d) I define dFSI as the -log2 of the ratio of the target space to the search space.
e) I categorize the value of dFSI according to an appropriate threshold (for the system and object I am evaluating, see later). If the dFSI is higher than the threshold, I say that the object exhibits dFSCI (see later for the evaluation of necessity algorithms).
To infer design for an object, the procedure is as follows:
a) I observe an object, which has its origin in a system and in a certain time span.
b) I observe that the configuration of the object can be read as a digital sequence.
c) If I can imagine that the object with its sequence can be used to implement a function, I define that function explicitly, and give a method to objectively evaluate its presence or absence in any sequence of the same type.
d) I can define any function I like for the object, including different functions for the same object. Maybe I can't find any function for the object.
e) Once I have defined a function which is implemented by the object, I define the search space (usually all the possible sequences of the same length).
f) I compute, or approximate, as much as possible, the target space, and therefore the target space/search space ratio, and take -log2 of that. This is the dFSI of the sequence for that function.
h) I consider whether the sequence has any detectable form of regularity, and whether any known explicit algorithm available in the system can explain the sequence. The important point here is: there is no need to exclude that some algorithm can logically exist that will one day be found, and so on. All that has no relevance. My procedure is an empirical procedure. If an algorithmic explanation is available, that's fine. If none is available, I go on with my procedure.
i) I consider the system, the time span, and therefore the probabilistic resources of the system (the total number of states that the system can reach by RV in the time span). So I define a threshold of complexity that makes the emergence by RV, in the system and in the time span, of a sequence of the target space an extremely unlikely event. For the whole universe, Dembski's UPB of 500 bits is a fine threshold. For biological proteins on our planet, I have proposed 150 bits (after a gross calculation).
l) If the functional complexity of the sequence I observe is higher than the threshold (IOWs, if the sequence exhibits dFSCI), and if I am aware of no explicit algorithm available in the system which can explain the sequence, then I infer a design origin for the object. IOWs, I infer that the specific configuration which implements that function originated from a conscious representation and a conscious intentional output of information from a designer to the object.
m) Why? This is the important point. This is not a logical deduction. The procedure is empirical. It can be applied as it has been described. The simple fact is that, if applied to any object whose origin is independently known (IOWs, we can know if it was designed or not, so we use it to test the procedure and see if the inference will be correct), it has 100% specificity and low sensitivity. IOWs, there are no false positives. IOWs, there is no object in the universe (of which we can know the origin independently) for which we would infer design by this procedure and be wrong.
Now, I will do a quick test. There are 560 posts in this thread. While I know independently that they are designed things, for a lot of reasons, I state here that any post here longer than 600 characters, and with good meaning in English, is designed. And I challenge you to offer any list of characters longer than 600, as many as you like, where you can mix two types of sequences: some are true posts in good English, with a clear meaning, taken from any blog you like; others will be random lists of characters, generated by true random-character-generator software.
Well, hear me! I will recognize all the true designed posts, and I will never make a false positive design inference for any of the other lists. Now, you can try any trick. You can add posts in languages that I don't know. You can add encrypted versions of true posts that I will not recognize. Whatever you like. I will not recognize their meaning, and I will not infer design. They will be false negatives. You know, my procedure has low sensitivity. However, I will infer design for all the posts which have good meaning in English, and I will be right. And I will never infer design for a sequence which is the result of a random character generator.
What about algorithms? Well, you can use any algorithm you like, but without adding any information about what has good meaning in English. IOWs, you cannot use the Weasel algorithm, where the outcome is already in the system. You cannot use an English dictionary, least of all syntax-correction software. Again, that would be recycling functional information, not generating it. But you can use an algorithm which generates sequences according to the Fibonacci series, if you like. Or an algorithm which takes a random character and generates lists of 600 of the same character. Whatever you like. Because I am not using order as a form of specification. I am using meaning. And meaning cannot be generated by necessity algorithms. So, if I see a sequence of 600 A's, I will not infer design for it. But for a Shakespeare sonnet I will.
This is a challenge. My procedure works. It works not because it is a logical theorem. Not because I have hidden some keithian circularity in it (why should a circular procedure work, at all?). It works because we can empirically verify that it works. IOWs, there could be sequences which are not designed, and which are not obvious results of an algorithm, and which have high functional information. There could be. It is not logically impossible. But none of those sequences is known. They simply don't exist. In the known universe, of all the objects of which we know the origin, only designed objects will be inferred as designed by the application of my procedure. Again, falsify this statement if you can. Offer one false positive. One.
Except for... Except, obviously, for biological objects. They are the only known objects in the universe which exhibit dFSCI, tons of it, and of which we don't know the origin. But that is exactly the point. We don't know their origin. But they exhibit dFSCI. In tons. So, I infer design for them (or at least, for those which certainly exhibit dFSCI). Is any algorithm known explicitly which could explain the functional information, say, in ATP synthase? No. There is nothing like that. There is RV + NS. But it cannot explain that. Not explicitly. Only dogma supports that kind of explanation.
The simple fact is: complex language and complex function never derive from simple necessity algorithms. You cannot write a Shakespeare sonnet by a simple mathematical formula. You cannot find the sequence of ATP synthase by a simple algorithm. Maybe we could do it by a very complex algorithmic search, which includes all our knowledge of biochemistry, present and future, and supreme computational resources. We are still very distant from that achievement. And the procedure would be infinitely more complex than the outcome, and it would require constant conscious cognition (design).
Well, I have not been so brief, after all. Now, if there are parts of my reasoning which are not clear enough, just ask. I am here. Or, if you just want to falsify my empirical procedure, offer a false positive. I am here. More likely, you can simply join keith in the group of the denialists. But at least, you will know more now of what you are denying. :) gpuccio
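For concreteness, here is a minimal Python sketch of the bookkeeping in steps f), i) and l) above; the function names and the toy numbers are illustrative assumptions, not gpuccio's own code, and the hard open problem of estimating the target space for a real protein is simply taken as an input.

import math

def dfsi_bits(target_space_size: float, search_space_size: float) -> float:
    # Step f): dFSI = -log2(target space / search space).
    return -math.log2(target_space_size / search_space_size)

def infer_design(dfsi: float, threshold_bits: float,
                 known_algorithm_explains_it: bool) -> bool:
    # Steps i) and l): infer design only if dFSI exceeds the chosen threshold
    # and no known explicit algorithm in the system explains the sequence.
    return dfsi > threshold_bits and not known_algorithm_explains_it

# Toy illustration with made-up numbers: a 115-residue protein family whose
# estimated target space is 2^93 sequences out of a 20^115 search space.
search = 20.0 ** 115
target = 2.0 ** 93
bits = dfsi_bits(target, search)
print(round(bits))                       # ~404 bits
print(infer_design(bits, 150, False))    # True against the proposed 150-bit threshold
print(infer_design(bits, 150, True))     # False if a known algorithm explains it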
#558 DNA_Jock #561 addendum RE: "systems that process complex functional purpose-oriented prescriptive information" you may read gpuccio's comments quoted in post #559 above. :) Dionisio
#558 DNA_Jock
Sure- if you say “there’s only one known source of information-processing systems: intelligence” this is fine so long as you include beavers, bees, termites, slime molds, yeast, and bacteriophage lambda, which are all sources of information-processing systems. Prions, not so much; they cause havoc in one particular type of information-processing system, one on which many people place an undue emphasis…
Apparently you did not understand what I meant by “there’s only one known source of information-processing systems: intelligence”. This is about what is known to produce systems that process complex functional purpose-oriented prescriptive information. So far, we know that humans can create such systems. Do we know anything else that can produce such systems? Yet the discoveries in biology keep pointing to exactly that kind of system. Our empirical evidence tells us that only intelligent agents have created that kind of system, although we know humans did not create the biological systems we encounter in nature. What can you say about this? Dionisio
DNA_Jock: OK, it's been fun. gpuccio
#551 DNA_Jock
D: Has anyone proposed a comprehensive step-by-step description of how those biological orchestrations could have appeared? If you know of any, can you point to it? I would gladly read it.
Autodidactism can lead one astray; I would recommend taking a degree or two in biology.
Is that the best you can do to respond to my question? Do you want to give it another try? Let me give you a hint. Here's something very interesting that gpuccio wrote in another thread today:
Indeed, what we see in research about cell differentiation and epigenomics is a growing mass of detailed knowledge (and believe me, it is really huge and daily growing) which seems to explain almost nothing. What is really difficult to catch is how all that complexity is controlled. Please note, at this level there is almost no discussion about how the complexity arose: we really have no idea of how it is implemented, and therefore any discussion about its origin is almost impossible. Now, there must be information which controls the flux. It is a fact that cellular differentiation happens, that it happens with very good order and in different ways in different species, different tissues, and so on. That cannot happen without a source of information. And yet, the only information that we understand clearly is the protein sequence information. Even the regulation of protein transcription at the level of promoters and enhancers by the transcription factor network is of astounding complexity.
Please, look at this paper: Uncovering Enhancer Functions Using the α-Globin Locus. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4199490/pdf/pgen.1004668.pdf In particular Fig. 2. And this is only to regulate the synthesis of alpha globin in red cells, a very straightforward differentiation task.
So, when I see that, say, 15 TFs are involved in regulating the synthesis of one protein, I want to know why, and what controls the 15 TFs, and what information guides that control. My general idea is that, unless we find some completely new model, information that guides a complex process, like differentiation, in a reliable, repetitive way must be written, in some way, somewhere. That’s what I want to know: where that information is written, how it is written, how does it work, and, last but not least, how did it originate? https://uncommondesc.wpengine.com/evolution/a-third-way-of-evolution/#comment-526476
Dionisio
dionisio, Sure- if you say "there’s only one known source of information-processing systems: intelligence" this is fine so long as you include beavers, bees, termites, slime molds, yeast, and bacteriophage lambda, which are all sources of information-processing systems. Prions, not so much; they cause havoc in one particular type of information-processing system, one on which many people place an undue emphasis... DNA_Jock
#551 DNA_Jock
Have you read “Signature in the Cell”?
No, I haven't. Have you?
What did you think of his treatment of the TDPS bootstrap problem?
Since I haven't read that material, I don't have any opinion on what it says. Why did you ask? Dionisio
#551 DNA_Jock
When you say “As far as I recall, there’s only one known source of information-processing systems: intelligence.”, I am inclined to agree with you, but only so long as we maintain a rather broad view of “intelligence”, that includes virtually all extant organisms, and viruses too. We can exclude prions.
I don't understand what you mean by that. Can you explain it more? Thank you. :) Dionisio
Crikey, that's even worse. You have moved from the "the experiment was designed" complaint to the "it isn't the experiment I would have done" complaint. I understand that you think otherwise, but you have not provided any basis for your argument that Lizzie's model, or Keefe's experiments, were not fit-for-purpose. Well, it's been fun. DNA_Jock
DNA_Jock: No. He could have selected anything, then he could have taken that "anything", described that "anything", and we would know that that "anything" can be found in a random library. And he could have taken that "anything", as it was in the original library, and tested it in any biological experiment to show that it is naturally selectable. So, it is not the "the experiment was designed" complaint. It is the "the experiment is a bad experiment, which means nothing" complaint. Unless the purpose was to show that protein engineering, in its bottom-up form, works. But we already knew that. Therefore, no, I am sorry, but I can't see that this is deranged. Can't you see, instead, how deranged it is to propose an implementation of engineering as though it were a model of NS? I suppose that the only quality required of a model is that it should model the thing it is supposed to model. "Model" means to have a similar form, similar properties, so that we can draw conclusions about the modeled thing from the model itself. All those things are violated in both our examples, and yet you and others still defend them as though they were true models. This is deranged. gpuccio
gpuccio, Would you have been satisfied with Keefe & Szostak's experiment if they had selected for biotin binding, which IS a biologically selectable function? Why, no, you wouldn't. Because, as you put it,
How does Szostak decide that he will work on the weak binding for ATP, and not on any other random sequence? Because he knows that ATP binding is what he wants to obtain. Because he is a designer.
This is the "the experiment was designed" complaint. You are stating that you would only be satisfied if A) the sequence has a biologically selectable function and B) the experimental design does not test for any particular biologically selectable function. Such conditions only apply in the field (where, of course, they have been observed: e.g. vpu). They cannot apply in any experiment. Can you not see that this is deranged? DNA_Jock
Learned Hand, to gpuccio:
Dembski made P(T|H), in one form or another, part of the CSI calculation for what seem like very good reasons. And I think you defended his concept as simple, rigorous, and consistent. But nevertheless you, KF, and Dembski all seem to be taking different approaches and calculating different things.
That's right. Dembski's problems are that 1) he can't calculate P(T|H), because H encompasses "Darwinian and other material mechanisms"; and 2) his argument would be circular even if he could calculate it. KF's problem is that although he claims to be using Dembski's P(T|H), he actually isn't, because he isn't taking Darwinian and other material mechanisms into account. It's painfully obvious in this thread, in which Elizabeth Liddle and I press KF on this problem and he squirms to avoid it. Gpuccio avoids KF's problem by explicitly leaving Darwinian mechanisms out of the numerical calculation. However, that makes his numerical dFSCI value useless, as I explained above. And gpuccio's dFSCI has a boolean component that does depend on the probability that a sequence or structure can be explained by "Darwinian and other material mechanisms", so his argument is circular, like Dembski's. All three concepts are fatally flawed and cannot be used to detect design. keith s
To address Dionisio's #533
Dionisio wrote: D: One very interesting thing about questions like the ones posted on #512 is that their correct answers lead to deeper questions, which eventually point to an elaborate information-processing system that’s a real delight for any passionate computer scientist or engineer. Do you understand this? [No emphasis in original]
DNAJ replied: Yes. One of the cool things Robert M. Pirsig points out in “Zen and the Art…” is that the more tests you do, the more hypotheses increase in number. Makes science a lot of fun, if rather poorly remunerated. I see you opted for the buns [Emphasis in original]
To which Dionisio replied: Apparently you did not understand what I wrote. That’s fine. Let’s try it again. As serious researchers dig into the biological systems, while trying to answer outstanding questions, they discover and report elaborate choreographies of information-processing mechanisms (regulatory networks, signaling pathways, epigenetics, proteomics, the whole nine yards), and newer questions arise. As far as I recall, there’s only one known source of information-processing systems: intelligence. There are “chicken-egg” questions associated with the observed systems. Has anyone proposed a comprehensive step-by-step description of how those biological orchestrations could have appeared? If you know of any, can you point to it? I would gladly read it. Did you understand this now? [Emphasis in original]
No, Dionisio, I understood what you wrote quite well the first time. What I did not realize is that you expected me to associate the phrase “elaborate information-processing system” with something that could not arise via evolution. I understood your statement, and I agreed with it. By the same token, I completely agree with the first sentence of your re-statement : “As serious researchers…newer questions arise”. Have you read “Zen and the Art…”? I think you would enjoy it. When you say “As far as I recall, there’s only one known source of information-processing systems: intelligence.”, I am inclined to agree with you, but only so long as we maintain a rather broad view of “intelligence”, that includes virtually all extant organisms, and viruses too. We can exclude prions. Before you start over-concluding, I feel I should warn you that, as far as I recall, there’s only one known source of intelligence: biology. Therefore any conclusion you wish to infer about the need for intelligent intervention in order to explain the origin of complex systems will apply equally to the need for biology in order to explain the origin of intelligence. You might want to think that one through. The problem is with the inductive nature of your “only one known source” argument.
There are “chicken-egg” questions associated with the observed systems.
I can remember being taught about the huge “chicken-egg” problem (you’ll appreciate the fact that we termed it a “bootstrap problem”) presented by template-directed protein synthesis. Have you read “Signature in the Cell”? What did you think of his treatment of the TDPS bootstrap problem?
Has anyone proposed a comprehensive step-by-step description of how those biological orchestrations could have appeared? If you know of any, can you point to it? I would gladly read it.
Autodidactism can lead one astray; I would recommend taking a degree or two in biology. Although, if you are correct when you say that your IQ is about the same as your age, this won’t help much. DNA_Jock
DNA_Jock: I don't have much time, so for the moment I will address only a couple of the more relevant points:

a) I admit that I was not very precise when I said that ATP binding "is not a function at all". I should have said: "ATP binding in itself is not a naturally selectable function in a biological context, neither in its 'strong' form (the final protein, see what happened when they tried to inject it in living organisms), nor, least of all, in the original weak affinity form." Sometimes I write too quickly. You certainly know that anything can be defined as a function, according to my approach, so ATP binding too can be defined as a function. Obviously, in its weak form, it is not a very complex function, because it is rather likely in random sequences. We all agree on that. The important point is that it is not a naturally selectable function. A weak affinity for ATP does not confer any reproductive advantage in any known context.

So, I suppose this brings us to the last point, about the difference between IS and NS. And here it is. I must seriously disagree with what you say about that final point. First of all, let's clarify that IS is not a process of selection where intelligent agents intervene to select each time. IS is usually a process where intelligent agents have defined what to select, how to select it, and then the selection can certainly be algorithmic, like in my example of antibody maturation. Therefore, your objections like: "Human beings did not 'select' anything, except of course the conditions under which the experiment was performed. They did NOT go in and hand-pick binders." or: "No intelligent agent intervenes in the process" are completely inappropriate and out of order.

So, what is the difference between IS and NS? Easy. NS is a process where no intelligent agent decides what will be selected, and the selection (which indeed should not be called selection, but that is not important) is made by what already exists in the system, and what exists in the system was not algorithmically set by any intelligent agent with any awareness of anything to be selected. In the form which interests our debate, NS happens only because there are self-replicators which compete for resources, and therefore the replicators which have a reproductive advantage are expanded (positive selection) while those which lose reproductive fitness are eliminated (negative selection). IOWs, the trait which defines NS is that what is selected is reproductive success. Anything else can be selected only indirectly, if it confers reproductive success. So, if you want to model NS you need to generate a model where the variation confers an advantage of itself, in an environment which has not been set up to measure anything in particular, nor to react to that measurement by expanding anything. That's what I mean when I say that, in NS, the variation must be selected "on its own merits".

In IS, on the other hand, there is a prior definition of the function, or of the type of function, and then a measurement system is set up to measure the defined function. This is a very important point. The measurement system can be set to measure the desired function at any level, even at very low levels. That's what Szostak has done. So, we can recognize functions which would never be recognized "on their own merits", in any natural context. And the second important point is, the system is set up to generate a continuous pathway to the final result.
Szostak selects any level of ATP binding, amplifies and mutates the selected result, and then re-selects any increase in the binding. He is willfully generating a continuous gradient of the desired function, where any increase is selected and amplified, and any decrease is eliminated. Elizabeth does the same thing. This works. But it is not NS. And it is not a model of NS. You should understand that. You cannot model one thing with another thing which has completely different properties and completely different behaviour.

How does Szostak decide that he will work on the weak binding for ATP, and not on any other random sequence? Because he knows that ATP binding is what he wants to obtain. Because he is a designer. How does Szostak measure the ATP binding? By an intelligent measurement system. Not by any advantage that the ATP binding confers in any natural system. IS measures what it wants to measure, at the level it wants to measure it. And IS actively acts on the results. It expands what has been considered "good", not on its own merits, but in the judgement of the intelligent agent or of the algorithm he has created. And varies it and selects it again. Szostak implements mutagenic PCR on the selected sequences, and then again, and then again. Expansion and variation and measurement and again expansion. Of what? Of whatever is considered the purpose, any time the result is nearer to the desired purpose. Even if the result is only a little bit nearer to the purpose, it is selected, and expanded, and again selected. It's easy to do that.

And what does he attain in the end? A protein which binds ATP. Strongly. Which is exactly what he wanted to obtain. Which is still a protein which has no useful function in a biological context, and which cannot be naturally selected in a biological context (indeed, it can easily be negatively selected as an ATP depriver).

You say, of Lizzie's example: "Strings with higher scores are more likely to be copied. Neither the program nor its designer has any clue about what a 'successful' string might look like." No. Strings with higher scores are more likely to be copied because Lizzie has decided that way. Strings with higher scores have no useful information of their own in a natural context, and the score cannot be measured by any natural context. This is not NS, nor any valid model of it.

This is a fundamental flaw of all these "models" of NS. The problem is not that they are designed, as you seem to believe my argument is. The problem is that they are designed as examples of IS, and then they are erroneously considered "models" of NS. They are not. So, I fully maintain my statement. Lizzie's "Creating CSI with NS" should be called "Creating CSI with design by IS". If you don't agree, please explain in what sense Lizzie's example would be a model of NS, and not simply a trivial implementation of IS and engineering.

Szostak's paper has exactly the same problem. He should be interested only in one thing: can we find, in random libraries, sequences that can be shown to be naturally selectable? His paper tells us nothing about that. It just tells us that we can find in random libraries sequences which can be intelligently engineered, even if the results are not especially useful or impressive. gpuccio
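To make the structure of the loop under debate here concrete, below is a toy sketch of "intelligent selection" as the comment above characterizes it: a function is defined in advance, the best scorers are retained however weak their scores, and the pool is amplified with copying errors and re-measured. It is not the Keefe & Szostak protocol and not Lizzie's program; every name, parameter, and the stand-in "assay" are invented for illustration.

```python
import random

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 amino acid letters, toy representation

def measure(seq: str) -> int:
    # Stand-in for the experimenter's chosen assay; here the "affinity" is
    # simply the count of one arbitrarily chosen residue.
    return seq.count("W")

def intelligent_selection(rounds=10, pool=500, length=80, keep=25, mut_rate=0.02):
    """Toy loop: measure the predefined function, keep the best scorers
    however weak, amplify them with copying errors, and re-measure."""
    population = ["".join(random.choice(ALPHABET) for _ in range(length))
                  for _ in range(pool)]
    for _ in range(rounds):
        survivors = sorted(population, key=measure, reverse=True)[:keep]
        population = ["".join(random.choice(ALPHABET) if random.random() < mut_rate else c
                              for c in s)
                      for s in survivors
                      for _ in range(pool // keep)]
    return max(population, key=measure)

print(measure(intelligent_selection()))
```

The sketch only shows the shape of the procedure being argued about (pre-defined measurable target, retain-amplify-mutate rounds); it takes no side on whether that procedure is a fair model of natural selection.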
KF, I see that you've had time to drop a few replies and write a characteristically long "FYI-FTR." Could you take a few seconds and respond to my repeated question above? What basis is there for asserting that Orgel and Dembski are using the same notion of "complexity?" (If you had opened comments on the latest FYI-FTR, I would also like to know your thoughts on Dembski's declaration that he had "pretty much dispensed with the EF," due to logical flaws, and his subsequent reinstatement of it on grounds that "critics [were] crowing about the demise of the EF." He never seemed to address the logical flaws he acknowledged crippled the EF in the first place; have you?) Learned Hand
Incidentally, gpuccio, I do appreciate your responses. I realize that I'm asking you to revisit things you probably feel that you've discussed exhaustively in the past. Thank you for your forbearance. Given how often CSI is discussed, do you not think it's oddly difficult to find examples of it being calculated? Do you think, with an hour to search, you could find more than a dozen? I suspect less than that; I searched a while back, and could just find a few serious attempts. It's like pulling teeth just to get someone to explicitly lay out their method and do the calculation. And when they do, everyone seems to have their own pet approach to the problem. I don't see what's "simple, beautiful and consistent" about CSI. Learned Hand
Learned Hand:
“Remember that there are a couple of assertions on the table: gpuccio’s claim that CSI is a beautiful, strong, consistent concept, and now yours that it’s a routine calculation. Neither claim is supported by the poor showings to date. If it’s so easy to calculate, then please calculate it rather than declaring that you’ve done so and listing a few parts of the calculation.”
Have I missed your comments to my posts #360 and #400 here? If they are “poor showings”, please explain why. Or simply admit that I have calculated what you ask me to calculate. You can agree or not with the calculation, but not go on saying that I have not done it.
Thanks for the response. I didn't respond to your comment at #360 because it (a) was addressed to someone else, and (b) doesn't calculate anything or refer to any explicit calculations. It just lists a series of things that you say were and weren't designed. Did you mean a different comment? Because if you did mean that #360 is responsive to my request that you show the work behind a CSI calculation, then yes, it's an extremely poor showing--there are no calculations there. Your comment #400 does sketch out a basic approach to declaring that specified complexity exists, although it's still not what I've been asking for--can you please simply give your approach? Just the formula you're using would be helpful, especially since it's difficult for me to work backwards and determine it from your casual discussion of the results of your calculations. I'm interested in comparing it to Dembski's, because you aren't following the same procedure as far as I can tell. Possibly I'm just misunderstanding your approach, though. It looks to me like you're disclaiming the need to consider non-design hypotheses when you say, "Obviously, both for ATP synthase and histone H3 I am aware of no algorithmic explanation for their origin. Can you give one?" (No, I can't. I wouldn't know where to start.) While I'm no mathematician, I think it's safe to say that not knowing what a term should be is not a sufficient reason to ignore it. Dembski made P(T|H), in one form or another, part of the CSI calculation for what seem like very good reasons. And I think you defended his concept as simple, rigorous, and consistent. But nevertheless you, KF, and Dembski all seem to be taking different approaches and calculating different things. If CSI is "simple, beautiful and consistent," why are there so many versions of it, and why is it like pulling teeth to get someone to explicitly demonstrate a calculation? It can be done, I'm sure I've seen people try to walk through the calculations before. But doing so seems to make IDists deeply uneasy, especially because it draws very pointed criticisms identifying flaws in both the concept and the execution (most obviously, from my perspective, the casual dismissal of the P(T|H) problem). Learned Hand
Dionisio, I answered many, but not all, of your questions. Some appeared to be entirely rhetorical. If there are particular questions that you think I ought to answer, but have not, please restate them. I will address your post #533. My responses to you may be a little slower coming than my responses to gpuccio. I hope you understand why. DNA_Jock
#541 KS, the dFSCI metric represents the informational content of an entity, rooted in the number of coded y/n q’s to specify state in a communicative context. Source, encoder, decoder, application, code system, physical expression etc. In that context, per config space scope vs sparse possible blind search, it becomes maximally implausible that such would be able to find islands of relevant function, WITHOUT need to define or work out precise calculated probability values. KF
#542 D-J, The just above will also be helpful for you. KF
Not at all helpful. My questions were: Are [you] quite comfortable with Durston's assumption that the exploration of insulin's aa sequence has been a random walk, without any intervention? Every single one of your calculations of p(T|H) and related alphabet soup relies on the assumption of independence (as you have admitted), and this assumption is false (as both you and Durston have admitted). You assert that the error is "not material". How big is the error? How do you know? Your posts at 497, 541 and 542 were relatively concise (542 admirably so), but non-responsive. Please be as precise and concise as you can while answering these questions. DNA_Jock
gpuccio, As I understand it, you are making three, potentially related, complaints about Keefe & Szostak:
1. There were no strong ATP binders in the original library.
2. Strong ATP binders only arose after "intelligent selection" had been applied.
3. ATP binding "is not a function at all".
Complaint # 1: "There were no strong ATP binders in the original library." This is a feature, not a bug. If the initial library had contained strong L-binders, and RM+S had not been able to improve on them, we would be drawn to conclude that L-binders were rather easy to come by, and that RM+S wasn't much use at improving them. Instead, the result was much more interesting: weak L-binders were found at a frequency that makes them accessible to an unguided stochastic process, AND many of these L-binders could be improved, often dramatically, by RM+S. I will deal with complaint # 2 last, since it ties in with your comments re "Creating CSI with NS". Complaint # 3: ATP binding "is not a function at all". While your statement is untrue, it can be re-phrased to form a seemingly reasonable objection, i.e. "ATP binding is not a very interesting function, I want to see you evolve an enzyme activity." If you think carefully about how their experiments work, you will be able to figure out for yourself why this is an inherent limitation of the method, but it is NOT a limitation that would apply in nature. This is what I said originally (#245 of the elephant thread) regarding the technology described in Keefe & Szostak:
Where we disagree, AFAICT, is your insistence that RV and NS must be considered separately AND that no NS can act until there is a selective advantage that is a “fact”, meaning it has been demonstrated to be operative (and, you seem to imply, historically accurate?) by evidence that you personally find clear and convincing. I, OTOH, am willing to posit small selective advantages for simpler, poorly optimized polymers, and try to investigate what these rudimentary functionalities might look like. And the experimental data on protein evolution supports me here: in particular, Phylos Inc demonstrated that using libraries of sizes of ~ 10^13 (e.g. USP 6,261,804), you could evolve peptides that bound to pretty much ANYTHING. Unfortunately, I can’t get much more specific, but here’s a “statement against interest”: the libraries produced better binders if the random peptide was anchored by an invariant ‘scaffold’. They used fibronectin, but I suspect that a bit of beta sheet at each end of the random peptide would have done the trick. They also had a technical problem in optimizing catalysis, but that limitation would not apply in actual living systems. [Emphasis in original]
Bottom up studies like Keefe’s are the only way to explore the frequency of “the shores of the islands of function” in protein space (that I have heard of). Studies like McLauglin explore the degree to which functional protein space is interconnected via single steps near an optimum. Durston asks “how broad is the peak?”, a question of secondary relevance, at best. Axe doesn’t explore anything; the paper is based on a glaring fallacy. See my attempt to explain this, inter alia, to Mung. Wordpress is mangling my attempts to provide you with a linkout. please enter "http://theskepticalzone.com/wp/?p=1472&cpage=7#comment-19065" in your browser. Dr. Axe is represented by “Dr. A” -- I’m a subtle guy. (Off-topic: I believe I owe you an apology: from various things that Mung had said about you at TSZ, I had erroneously assumed that you had discussed PDZ on UD. My bad.) There is not any inconsistency between, to use your terms, the forward data and the reverse data: Keefe’s forward data are compatible with McLauglin’s and Durston’s reverse data. You may be mis-understanding Durston’s data. Axe himself is mis-understanding his own data. Finally, complaint #2: Strong ATP binders only arose after “intelligent selection” had been applied. Your complaint against “intelligent selection” is fundamentally flawed. If you wish to argue that a particular model, or a particular experiment, does not accurately reflect the process that it purports to model and/or test, then you need to explain, with supporting data, why you believe this to be the case. Merely complaining that “the experiment was designed” or “the solution was smuggled in” does not cut it. Of course the experiment was designed! The question is rather “Was it appropriately designed?” In the case of Keefe & Szostak, they performed random mutation, and then selection for binding, achieved by letting the entire mix stick to immobilized ligand, washing and eluting. Human beings did not “select” anything, except of course the conditions under which the experiment was performed. They did NOT go in and hand-pick binders. Similarly your complaint that Lizzie’s “Creating CSI with NS” should be called “Creating CSI with design by IS”
IS requires a conscious intelligent agent who recognizes some function as desirable, sets the context to develop it, can measure it at any desired level, and can intervene in the system to expand any result which shows any degree of the desired function. IOWs, both the definition of the function, the way to measure it, and the interventions to facilitate its emergence are carefully engineered. It’s design all the way. On the contrary, NS assumes that some new complex function arises in a system which is not aware of its meaning and possibilities, only because some intermediary steps represent a step to it, and through the selection of the intermediary steps because of one property alone: higher reproductive success. So, I ask a simple question: what reproductive success is present in Lizzie’s example? None at all. It’s the designer who selects what he wants to obtain. The property selected has no capability at all to be selected “on its own merits
Strings with higher scores are more likely to be copied. Neither the program nor its designer has any clue about what a ‘successful” string might look like. No intelligent agent intervenes in the process. Just like antibody affinity maturation, which you also used as an example of IS. Some parts of the gene mutate at a higher rate than others, therefore “intelligence”. Really? DNA_Jock
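By way of contrast with the selection-to-a-target sketch given earlier in the thread, here is a minimal, generic sketch of the kind of program described just above (it is not Lizzie's actual code): each string's chance of being copied is proportional to its score, nothing is hand-picked, and no target string appears anywhere in the program; only a property of the strings is rewarded. The scoring rule below is an arbitrary stand-in.

```python
import random

def score(s: str) -> float:
    # Arbitrary stand-in fitness: reward the longest run of identical symbols.
    # Note that no target string is stored or compared against.
    best = run = 1
    for a, b in zip(s, s[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return float(best)

def next_generation(pop, mut_rate=0.01):
    """Fitness-proportional copying: higher-scoring strings are more likely
    to be copied, but none is deterministically retained or hand-picked."""
    weights = [score(s) for s in pop]
    children = random.choices(pop, weights=weights, k=len(pop))
    return ["".join(random.choice("01") if random.random() < mut_rate else c
                    for c in s)
            for s in children]

population = ["".join(random.choice("01") for _ in range(100)) for _ in range(200)]
for _ in range(50):
    population = next_generation(population)
print(max(score(s) for s in population))
```

Whether a program of this shape counts as a model of NS, or as another instance of IS because the scoring rule was chosen by a designer, is exactly the point the two sides above are disputing.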
KS, with all due respect, insistently repeating errors does not make them true. I have said enough long since for a reasonable person to see just why FSCO/I -- which includes dFSCI -- is real, is observable and recognisable, is quantifiable based on observable characteristics, and why it is maximally unlikely to result from blind chance and mechanical necessity but is, per trillions of directly observed cases in point of its cause, a reliable sign of design. (The just above linked is yet another explanation for those who needed it, in reply to your caricaturing of Newton, which I just could not let pass, as one trained in my discipline.) I have said enough for the reasonable man, so I need not elaborate further regardless of drumbeat repetition of erroneous assertions and insinuations. Gotta go now, errands to run for She Who Must be Obeyed. KF kairosfocus
D-J, The just above will also be helpful for you. KF kairosfocus
KS, the dFSCI metric represents the informational content of an entity, rooted in the number of coded y/n q's to specify state in a communicative context. Source, encoder, decoder, application, code system, physical expression etc. In that context, per config space scope vs sparse possible blind search, it becomes maximally implausible that such would be able to find islands of relevant function, WITHOUT need to define or work out precise calculated probability values. KF PS: here, will be relevant. kairosfocus
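As a rough numerical companion to the "number of coded y/n questions" framing above, the snippet below converts the 150-bit threshold proposed earlier in the thread into equivalent fully specified symbols under a uniform model. It is only a unit conversion under that assumption; it says nothing about how many positions in a real protein are actually constrained.

```python
import math

# Specifying one state out of N equiprobable options takes log2(N) yes/no
# questions, i.e. log2(N) bits.
bits_per_residue = math.log2(20)      # one of 20 amino acids, about 4.32 bits
bits_per_ascii_char = math.log2(128)  # one of 128 ASCII characters, 7 bits

threshold_bits = 150.0                # threshold proposed earlier in the thread
print(threshold_bits / bits_per_residue)     # about 35 fully specified residues
print(threshold_bits / bits_per_ascii_char)  # about 21 fully specified characters
```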
"No evolutionary biologists thinks the flagellum (or any other complex structure) arose through a purely random process; everyone thinks selection was involved." Its such a ridiculous point when evolutionists say this. In other words, it is completely random, but remember, some organisms died. phoodoo
Strange how evos attack ID methodology seeing that they don't use any methodology beyond bald declaration. Joe
keith s:
The dFSCI number reflects the probability that a given sequence was produced purely randomly, without selection.
Prove it.
No evolutionary biologist thinks the flagellum (or any other complex structure) arose through a purely random process; everyone thinks selection was involved.
Yet natural selection has proven to be impotent. But anyway, what is the methodology those evolutionary biologists used to determine that unguided evolution could produce a bacterial flagellum? Please be specific, that way we can compare methodologies. Joe
DNA_Jock RE: #535 addendum Lately we see mathematicians, electrical engineers and computer science professionals involved in multidisciplinary teams, working on important biology-related research projects at different institutions. That's why we look forward, with much anticipation, to reading newer reports coming out of research. Because they shed more light on the elaborate cellular and molecular choreographies observed in the biological systems. These days it's quite fascinating to closely follow what is going on in biology. Let's enjoy it! :) Dionisio
keith s: Just to be clear. I have already answered your "objections". I will not do it again. You seem to love repetitions. I don't. There is a point where reasonable people must accept that they have different ideas. You don't seem to believe that, and go on crying: "I am right. I win." That's fine. Go on. gpuccio
DNA_Jock RE: #534 addendum You may want to keep in mind what is written in post #525. :) Dionisio
DNA_Jock RE: #533 addendum Here's an example of a very interesting scientific report and some of the new questions that arise while carefully reading it: https://uncommondesc.wpengine.com/evolution/a-third-way-of-evolution/#comment-525809 Note that there are many examples like this. Over 550 just in this thread: https://uncommondesc.wpengine.com/evolution/a-third-way-of-evolution/ Can you answer the questions in post #533? Thank you. :) Dionisio
#522 DNA_Jock
D: One very interesting thing about questions like the ones posted on #512 is that their correct answers lead to deeper questions, which eventually point to an elaborate information-processing system that’s a real delight for any passionate computer scientist or engineer. Do you understand this?
Yes. One of the cool things Robert M. Pirsig points out in “Zen and the Art…” is that the more tests you do, the more hypotheses increase in number. Makes science a lot of fun, if rather poorly remunerated. I see you opted for the buns. :)
Apparently you did not understand what I wrote. That's fine. Let's try it again. As serious researchers dig into the biological systems, while trying to answer outstanding questions, they discover and report elaborate choreographies of information-processing mechanisms (regulatory networks, signaling pathways, epigenetics, proteomics, the whole nine yards), and newer questions arise. As far as I recall, there's only one known source of information-processing systems: intelligence. There are "chicken-egg" questions associated with the observed systems. Has anyone proposed a comprehensive step-by-step description of how those biological orchestrations could have appeared? If you know of any, can you point to it? I would gladly read it. Did you understand this now? :) Dionisio
gpuccio, We'd like to see a calculation of some quantity -- under whatever acronym you like -- the presence of which would demonstrate, in a non-circular way, that the structure or sequence in question could not have been produced by evolution or other nonintelligent natural processes. That is what CSI was touted to do, that is what you claim for dFSCI, and that is what KF claims for FSCO/I. dFSCI cannot do what you claim for it, as I explained in my previous comment. That you choose not to defend it is not to your credit. If you don't think it's worth defending, then no one else will either. keith s
keith s: Very briefly: a) "We've been over this many times." Correct. And I will not go back again to "discussing" it with you. Whoever is interested can easily find my many detailed arguments about dFSCI spread on this blog. I must apologize for saying this, but I really don't consider you a serious interlocutor. Just some person who likes to state "I am right and I win" as a mantra. OK, go on. b) "the problem with your dFSCI calculations is that the number they produce is useless" Well, at least you admit that I have calculated dFSCI. You just don't agree that my calculations are valid. That's fine. The next time that Learned Hand or anyone else on your side comes back with the false statement that I have never calculated dFSCI, and that I am afraid to do it, can I just mention you, and say that "even keith s admits that I have calculated dFSCI, even if he does not consider my calculations valid"? Maybe that would be of some help. You must certainly be held in very high esteem on your side, such a fine bomber. :) gpuccio
gpuccio, We've been over this many times, but the problem with your dFSCI calculations is that the number they produce is useless. The dFSCI number reflects the probability that a given sequence was produced purely randomly, without selection. No evolutionary biologist thinks the flagellum (or any other complex structure) arose through a purely random process; everyone thinks selection was involved. By neglecting selection, your dFSCI number is answering a question that no one is asking. It's useless. There is a second aspect of dFSCI that is a boolean (true/false) variable, but it depends on knowing beforehand whether or not the structure in question could have evolved. You can't use dFSCI to show that something couldn't have evolved, because you already need to know that it couldn't have evolved before you attribute dFSCI to it. It's hopelessly circular. What a mess. The numerical part of dFSCI is useless because it neglects selection, and the boolean part is also useless because the argument that employs it is circular. dFSCI is a fiasco. keith s
Learned Hand: "Remember that there are a couple of assertions on the table: gpuccio’s claim that CSI is a beautiful, strong, consistent concept, and now yours that it’s a routine calculation. Neither claim is supported by the poor showings to date. If it’s so easy to calculate, then please calculate it rather than declaring that you’ve done so and listing a few parts of the calculation." Have I missed your comments to my posts #360 and #400 here? If they are "poor showings", please explain why. Or simply admit that I have calculated what you ask me to calculate. You can agree or not with the calculation, but not go on saying that I have not done it. gpuccio
Sorry for not replying earlier, since we last spoke I've flown to Hong Kong for work. I wish I could share the view from my room's window. I'll be here for a week or so; obviously I won't be very responsive given the time difference. KF, keiths is right. (I can't not read your user name as a plural.) I'm looking for the explicit calculation, particularly so that I can compare it to Dembski's. Remember that there are a couple of assertions on the table: gpuccio's claim that CSI is a beautiful, strong, consistent concept, and now yours that it's a routine calculation. Neither claim is supported by the poor showings to date. If it's so easy to calculate, then please calculate it rather than declaring that you've done so and listing a few parts of the calculation. Once again, it's striking from a critic's perspective how little perspective IDists have on their own standards. They claim that CSI is a phenomenally powerful concept that can revolutionize science, that it's a robust and beautiful tool that can reliably detect design with no false positives, that it doesn't need to be tested, that it has been tested, that it can't be tested, that it's used every day... but they have the hardest time actually saying, "This is how I calculated CSI in this specific case, step-by-step." Skipping that part makes it obvious, at least to those outside the ideology, how poor a tool CSI actually is. And then, of course, there's the total failure of IDists to ever (literally ever, as far as I can tell) use CSI to detect design in the real world under controlled circumstances. It seems only to work in cases where design is either known in advance (Shakespeare) or assumed on faith (flagella). Why can't it be used to distinguish white noise from radio communications? Or determine whether the latest variant of the Ebola virus is a natural strain? Or to break codes, or detect steganography, or distinguish between data and noise in hard drive recovery, or answer any of Elsberry and Shallit's "eight challenges"? I think the answer is clear: testing CSI puts the concept at risk, since it might fail. That risk is unnecessary, since IDists don't require that their tools be tested; they're promoted on faith and logic (albeit logic under the heavy guidance of motivated reasoning), rather than empirical success. I'm sure Dembski and other creationists would love to shout from the rooftops that their tools have proven to be successful and useful, especially since the secular world would take up and use a productive tool, thus making it impossible for skeptics to ignore. But they are notably shy about dipping their toes in that water. I think it's because they know the risk is untenable--they understand as well as their critics do that the tools just don't work. That's obviously not an opinion shared by everyone here, but although I've asked several times, why don't IDists test these tools?, I haven't heard any strong answers. (In fact, the only answer I've heard is from WJM, who argued simply that skeptics wouldn't believe the results. I'm not sure he's wrong about that, but I am sure that's a terrible reason not to prove that your groundbreaking idea actually works. It has shades of Dembski's bizarre double-retraction of the explanatory filter, in which he declared the tool was valid seemingly because people were mocking him rather than because he'd overcome its crippling logical defect.) Finally, I note that you are still claiming that "specified complexity was first observed and stated on the record by Orgel in 1973." 
But that's starting to strain your credibility; as I've pointed out, the Orgel cite plainly uses a different definition of "complexity" than Dembski does. Orgel's "specified complexity" can hardly be the same as Dembski's if they're talking about different concepts of "complex." I've asked you why you think they're the same; if you've answered, I don't see it in this thread. Is there some reason to think they're the same? Learned Hand
kairosfocus, A reminder:
kairosfocus:
LH: I gave an outline of an absolutely routine way to measure functionally specific info, with a guide as to how it would address the flagellum.
KF, Learned Hand asked for an “explicit calculation”, not an “outline”. You say the calculation is “absolutely routine”. Then perform it! Surely you are competent enough to perform an “absolutely routine” calculation, aren’t you? Show us your explicit and “absolutely routine” calculation of how much “functionally specific info” is contained in the bacterial flagellum. P.S. Dembski’s CSI argument is circular, as I explained above. If you disagree, you need to show where my argument fails, rather than tossing out distractive red herring talking points designed to polarise and confuse the atmosphere. Please do better.
keith s
DNA_Jock: Here is another brief comment I made recently (to Alan Fox): "Of the Szostak paper, you already know what I think. It is essentially a false paper, at least if interpreted as an estimate of the occurrence of functional proteins in a random library. As I have said many times, the ATP binding protein which they describe, and which however is not functional at all in a biological context, least of all naturally selectable in any true scenario, was not in the original random library, but is the result of intelligent selection. What was in the original random library were a few sequences with some very weak affinity for ATP, certainly trivial in any real biochemical context. Period. Still, the third paper you reference quotes the Szostak paper as a demonstration that 'the frequency of occurrence of functional proteins in a randomized library has been estimated to be about 1 in 10^11'. So, this false conclusion is propaganda for the neo-darwinist field." About the powers of Intelligent Selection, I have often offered the very interesting example of antibody affinity maturation after the first immune response to new epitopes. Here we have a beautiful example of an algorithm embedded in the immune system which can take an existing function selected from a relatively random repertoire (the basic antibody repertoire) and optimize it rather quickly (a few months) by targeted random mutations and Intelligent Selection which uses the environmental information in the epitope. A further proof of what Intelligent Selection can do, with its added information and algorithmic power. gpuccio
DNA_Jock You have not answered all the questions I have asked you. You don't have to answer them, but remember there are more lurkers than commenters in this thread. Don't you care about the impression they will get from following this discussion? :) Dionisio
DNA_Jock: "The wall was always there. The bullet-holes arose before humans existed (by three or more days). :) Biochemists arrive. They observe bullet holes, and paint circles around the bullet holes that they observe. They may debate how big a circle they should draw. That this is, in fact, the sequence of events is highlighted by your observation of the biochemist that discovers a new enzyme activity: “Look, a new bullet hole! Quick, pass me the paint!”" I don't agree, but you can keep your point of view. For me, it is obvious that when we observe the function of an enzyme we are not painting anything: we just measure an activity in the lab, we see a reaction take place which would never take place if the enzyme were not there. What are we painting? Absolutely nothing. As I said, I respect your views, but don't agree. Regarding Szostak, some time ago I spent a lot of time to analyze in detail both the original paper and its follow-ups, but unfortunately I have not kept the link. I paste here a recent summary of my main argument:
It is not true that according to data there is an "uncertainty" in the quantification of folding/functional sequences in random libraries. The simple truth is that Axe's data (and those of some others, who used similar reverse methodology) are true, while the forward data are wrong. Not because the data themselves are wrong, but because they are not what we are told they are. The most classical paper about this forward approach is the famous Szostak paper: Functional proteins from a random-sequence library http://www.nature.com/nature/j.....0715a0.pdf I have criticized that paper in detail here some time ago, so I will not repeat myself. The general idea is that the final protein, the one they studied and which has some folding and a strong binding to ATP, is not in the original random library of 6 * 10^12 random sequences of 80 AAs, but is derived through rounds of random mutation and intelligent selection for ATP binding from the original library, where only a few sequences with very weak ATP binding exist. Indeed, the title is smart enough: "Functional proteins from a random-sequence library" (emphasis added), and not "Functional proteins in a random-sequence library". The final conclusion is ambiguous enough to serve the darwinian propaganda (which, as expected, has repeatedly exploited the paper for its purposes): "In conclusion, we suggest that functional proteins are sufficiently common in protein sequence space (roughly 1 in 10^11) that they may be discovered by entirely stochastic means, such as presumably operated when proteins were first used by living organisms. However, this frequency is still low enough to emphasize the magnitude of the problem faced by those attempting de novo protein design." Emphasis mine. The statement in emphasis is definitely wrong: the authors "discovered" the (non-functional) protein in their library by selecting weak affinity for ATP (which is not a function at all) and deriving from that a protein with strong affinity (which is a useless function, in no way selectable) by RV + Intelligent Selection (for ATP binding). That's why the bottom-up studies like Szostak's tell us nothing about the real frequency of truly functional, and especially naturally selectable, proteins in a random library. That's why they are no alternative to Axe's data, and that's why Hunt's "argument" is simply wrong.
(The reference to Hunt is because he takes the Szostak number as an upper threshold for the frequency of functional information in random sequences.) As I have explained in my criticism of Lizzie's arguments about NS, the important point is that Intelligent Selection is not Natural Selection. For convenience, I paste here my criticism of Lizzie's argument:
The discussion about Elizabeth's post, if I remember well, was "parallel": I posted here and my interlocutors posted at TSZ. I have nothing against posting at TSZ (I have done that, or at least in similar places, more than once some time ago). However, I decided some time ago to limit my activity to UD: it is already too exacting this way. However, my criticism of Lizzie's argument is very simple: it is an example of intelligent selection applied to random variation. It is of the same type as the Weasel and as Szostak's ATP binding protein. You see, I am already well convinced that RV + IS can generate dFSCI. It is the bottom-up strategy to engineer things. So, I have no problem with Lizzie's example, except for its title: "Creating CSI with NS". That is simply wrong. "Creating CSI with design by IS" would be perfectly fine. Your field seems to willfully ignore the difference between NS and IS. It is a huge difference. IS requires a conscious intelligent agent who recognizes some function as desirable, sets the context to develop it, can measure it at any desired level, and can intervene in the system to expand any result which shows any degree of the desired function. IOWs, both the definition of the function, the way to measure it, and the interventions to facilitate its emergence are carefully engineered. It's design all the way. On the contrary, NS assumes that some new complex function arises in a system which is not aware of its meaning and possibilities, only because some intermediary steps represent a step to it, and through the selection of the intermediary steps because of one property alone: higher reproductive success. So, I ask a simple question: what reproductive success is present in Lizzie's example? None at all. It's the designer who selects what he wants to obtain. The property selected has no capability at all to be selected "on its own merits". Therefore, Lizzie's example has nothing to do with NS. I am certain of Lizzie's good faith. I have great esteem for her. I am equally certain that she is confused about these themes.
Well, that's all, for the moment. gpuccio
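Taking the two figures quoted in the comment above at face value, a quick back-of-the-envelope check shows what the paper's reported frequency would imply for the starting library. The numbers are simply the ones quoted above, not independent measurements, and the arithmetic takes no side on how those starting sequences should be described.

```python
library_size = 6e12       # "6 * 10^12 random sequences of 80 AAs", as quoted above
quoted_frequency = 1e-11  # "roughly 1 in 10^11", as quoted above

expected_starting_hits = library_size * quoted_frequency
print(expected_starting_hits)  # 60.0 sequences expected at that frequency
```

Whether those expected hits should be called "functional proteins" or "a few sequences with very weak affinity" is precisely what the two sides of this exchange disagree about.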
#522 DNA_Jock
I see you opted for the buns.
Why did you write that? What did you mean by that statement you wrote? Dionisio
One very interesting thing about questions like the ones posted on #512 is that their correct answers lead to deeper questions, which eventually point to an elaborate information-processing system that’s a real delight for any passionate computer scientist or engineer. Do you understand this?
Yes. One of the cool things Robert M. Pirsig points out in “Zen and the Art…” is that the more tests you do, the more hypotheses increase in number. Makes science a lot of fun, if rather poorly remunerated. I see you opted for the buns. :) DNA_Jock
#517 DNA_Jock
I did once work on human actin genes, but that was over 30 years ago.
Definitely you have missed the best part of the ride so far. Are you aware of how much has been discovered in the last 30 years? But also, have you noticed how many new questions have been raised lately? Dionisio
#517 DNA_Jock
You’ll have to forgive me for not participating in that thread, as I was banned at the time.
Participating in what thread? What are you talking about? Did anyone ask you to participate in any thread? Can you explain what you meant by that statement you wrote? Did you understand what I wrote? Apparently you didn't. Please, don't tell me your reading comprehension is as poor as mine. :) Dionisio
#517 DNA_Jock
I used to be a molecular biologist.
Would a passionate molecular biology scientist ever leave such a fascinating profession? What could be more exciting than that? Can you elaborate on this? Thank you. Dionisio
#517 DNA_Jock Did you read post #516? Did you understand it? Do you agree? No? Why not? Dionisio
Dionisio -
Ok, let me see if I understand this: is “the Purpose” related to your work or profession?
Prior employment, yes.
Does that mean you’re a molecular biology scientist and you possess some breakthrough information but can’t reveal it here?
I used to be a molecular biologist. I would not characterize the information as “breakthrough”.
Obviously, my guessing was wrong, because a molecular biology scientist would be too busy working on serious research, hence no spare time to squander on the blogosphere.
Not necessarily. [beat] I could be arguing in my spare time. ;)
However, maybe there’s a possibility that you could help me to answer questions like these: [questions re cytokinesis] Would “the Purpose” allow you to answer questions like those, as long as you don’t reveal any confidential breakthrough information?
I did once work on human actin genes, but that was over 30 years ago. Your curiosity would be better served by someone a little bit more up to date – perhaps Dr. Bezanilla herself. And don’t worry, it’s only the technology described in Keefe & Szostak that’s an issue.
There are gazillion questions like those in over 550 posts in the “Third Way” thread in this same site.
You’ll have to forgive me for not participating in that thread, as I was banned at the time.
Did you read gpuccio’s post #501 carefully enough to comment on it so fast?
What can I say? I’m a quick study. :)
gpuccio I didn’t mean to interrupt your interesting discussion with DNA_Jock.
As far as I’m concerned, you are welcome to join in. Or throw buns. Your choice. :) DNA_Jock
#502 DNA_Jock One very interesting thing about questions like the ones posted on #512 is that their correct answers lead to deeper questions, which eventually point to an elaborate information-processing system that's a real delight for any passionate computer scientist or engineer. Do you understand this? :) Dionisio
gpuccio I didn't mean to interrupt your interesting discussion with DNA_Jock. :) Dionisio
#502 DNA_Jock Let's see: gpuccio posted @501 a very informative 85-line comment for you, time-stamped 10:05am About 36 minutes later your 10-line reply appeared: 502 DNA_Jock November 7, 2014 at 10:41 am Did you read gpuccio's post #501 carefully enough to comment on it so fast? This confirms what I wrote in post #507. :( Dionisio
Did I shutdown this discussion thread? Oops! Sorry. :( Dionisio
#508 DNA_Jock #511 PS However, maybe there's a possibility that you could help me to answer questions like these:
What makes myosin VIII become available right when it's required for cytokinesis? Same question for actin. What genes are they associated with? What signals trigger those genes to express those proteins for cytokinesis? BTW, what do the transcription and translation processes for those two proteins look like? Are they straightforward or convoluted through some splicing and stuff like that? Are there chaperones involved in the post-translational 3D folding? Where is it delivered to? How does that delivery occur? How does the myosin pull the microtubule along an actin filament? How many of each of those proteins should get produced for that particular process? Any known problems in the cases of deficit or excess?
Would "the Purpose" allow you to answer questions like those, as long as you don't reveal any confidential breakthrough information? There are gazillion questions like those in over 550 posts in the "Third Way" thread in this same site. Dionisio
#508 DNA_Jock #510 correction Obviously, my guessing was wrong, because a molecular biology scientist would be too busy working on serious research, hence no spare time to squander on the blogosphere. :) Dionisio
#508 DNA_Jock Does that mean you're a molecular biology scientist and you possess some breakthrough information but can't reveal it here? Is that right? Sorry if I got it wrong again. Please, refer to post #507 to understand my condition. Dionisio
#508 DNA_Jock Ok, let me see if I understand this: is "the Purpose" related to your work or profession? Dionisio
Dionisio, Confidentiality agreements define "the Purpose", such as "to assess whether the parties wish to enter into a collaboration". Confidential information that is disclosed is only allowed to be used for "the Purpose". That is, no stealing someone else's ideas and using them to further your own program. Likewise, if you have confidential information about a company's finances, it is illegal to use that information to enhance your returns in the stock market... Hope this helps DNA_Jock
#506 DNA_Jock
Contractually, I am only permitted to use these data for “the Purpose”.
Please, note that my reading comprehension is poor, English is not my first language, my IQ score is about the same as my age (but it changes in the opposite direction), when someone says a joke at a weekend social gathering, it takes me until Monday or Tuesday to get it, after my wife patiently explains it to me. All that said, can you tell me in easy terms, what is “the Purpose”? Thank you for your compassion. :) Dionisio
Dionisio, I was hoping y'all would be able to read between the lines... Contractually, I am only permitted to use these data for "the Purpose". Sadly, "the Purpose" does not include proving some schmuck on an internet blog wrong. If I produced this excuse in the middle of an argument about the implications of Keefe & Szostak 2001, then I would fully sympathize with anyone who responded "Wow! That's the lamest excuse ever!" So I bring it up now, before gpuccio and I get into details. DNA_Jock
Oops! I meant 'my pay grade' Dionisio
gpuccio and KF Perhaps I have said this before, but I have no problem saying it again: If the Nobel prize included a category for 'patience' you two would have been nominated for it. I hope many lurkers are benefiting from reading what you are writing. However, it seems like some of your interlocutors aren't getting it. Some of the stuff you write is above my ay grade, but I enjoy reading it anyway. Thank you! :) Dionisio
#502 DNA_Jock
our conversation might reach a point where I am reduced to saying “I know for a fact that you are wrong about X”, but I will be unable to provide supporting data for my position
[...] “I know for a fact [...]”, but I will be unable to provide supporting data for my position
Say what? Please, can you explain what you meant by that? Thank you. :) Dionisio
No gpuccio, The wall was always there. The bullet-holes arose before humans existed (by three or more days). :) Biochemists arrive. They observe bullet holes, and paint circles around the bullet holes that they observe. They may debate how big a circle they should draw. That this is, in fact, the sequence of events is highlighted by your observation of the biochemist that discovers a new enzyme activity: "Look, a new bullet hole! Quick, pass me the paint!" I would be interested in hearing your complaints re Keefe and Szostak. One caveat with regard to this paper: our conversation might reach a point where I am reduced to saying "I know for a fact that you are wrong about X", but I will be unable to provide supporting data for my position. I know that this would be a deeply unsatisfying way to end the conversation, so I warn you ahead of time. DNA_Jock
DNA_Jock: I always appreciate your comments. First of all, the Szostak paper I refer to is not that one, but the one about finding functional sequences in random peptides. It has been discussed many times here, and that's why I assumed you knew what I was talking about. It's this one: https://molbio.mgh.harvard.edu/szostakweb/publications/Szostak_pdfs/Keefe_Szostak_Nature_01.pdf

The Hazen paper you reference seems interesting; I will study it carefully. I am interested in any approach to the concept of functional information, except the denial of it. I think that Durston's approach is valuable. I have never said it is perfect. The problem here is to recognize functional information and try to measure it. There can be many ways to model it or measure it, but for the design inference we are interested in an estimate of its order of magnitude, rather than an exact measure.

I don't agree with you that mathematical constants are so different from machines, both biological and not. The mathematical constant conveys a meaning. A machine implements a function. One is descriptive information, the other prescriptive information. But both meaning and function are pure conscious dimensions: they can only be recognized by a conscious witness, and they don't exist as purely objective things. I believe that you still miss an important aspect of the problem. The important point is that the conveyance of meaning, or the implementation of function, are consciousness-related events which can be simulated by non-conscious events, but only in simple form. In that sense, an English sonnet and a functional machine, either biological or not, or just a piece of working software, are even better examples than a "simple" mathematical constant in binary form. Indeed, while we can still imagine that our unawareness of some algorithm which can explain the sequence of pi digits could in principle be due to our lack of imagination, that is practically impossible for a Shakespeare sonnet. Do you really believe that some algorithmic, non-conscious process could output the sonnet I have quoted? Do you really believe it?

The same is true for function. A watch cannot be the result of a blind watchmaker, if it is complex enough. A watch expresses the desire to measure time. No blind process really desires to measure time. Function is the expression of desire. From desire, through understanding of meaning, comes planning, and therefore design. All that happens in consciousness, and only in consciousness. Now, non-conscious processes may look designed and functional. That is true. But only simple ones. I maintain that any definable function which needs more than 500 specific bits to be implemented will always be found to come from a design process, from conscious planning. I maintain my challenge to anyone to offer a false positive to my dFSCI procedure.

I maintain that ATP synthase has at least 1600 bits of functional information, and that there is no possible algorithmic explanation for that. I have not even used the Durston method here; I have just counted the conserved AA positions from bacteria and archaea to humans. I am not giving a precise number, but a reliable order of magnitude. It's enough. ATP synthase expresses the need to store energy in biochemical form, starting from a proton gradient across a membrane. That's an engineering problem, a very refined one, and very refined is the solution. These are not painted targets. They are objective targets, existing on their own.
ATP synthase has accomplished its task for 4 billion years, well before our existence as possible "painters".

Going back to Texas shooters, I would like to try to clarify why I think that you have mixed two different problems in one argument. Bear with me. First of all, let's say that our problem is a scenario where a shooter who is not sharp at all is erroneously considered sharp. OK? Now, I say that there are two different situations where that can happen. I will call them the TS fallacy and the TS error.

a) TS fallacy. Very simply, a not-so-sharp shooter shoots at a distant wall. Then we go to the wall, and we paint a target for each shot, and we say that the shooter is sharp. This is a logical fallacy: the targets were not there before, no shooter has aimed at them. We painted the targets after the shooting. The targets have no independent reality. Their position is fixed by the previous positions of the shots.

b) TS error. Very simply, a not-so-sharp shooter shoots at a distant wall. Then we go to the wall, and we see that the wall was filled with targets, and that some shots have hit targets because it was really likely to hit some, even without any aiming. So, if we say that the shooter has hit the targets because he is sharp, we are making an error. This is not a logical fallacy. It is simply a wrong inference. We attribute the hits to good aim because we don't correctly evaluate the probability of random hits, which is high because there are a lot of targets.

Now, my point is that a) cannot be used against ID and the dFSCI procedure. Functional specification is not a post-hoc painting. It is a post-hoc description of what exists independently. We are not inventing the functionality of ATP synthase because we have discovered it in our lab: it has always been there. We are inventing nothing, no more than Newton was inventing the law of gravity when he discovered it. The molecule has always been able to do what it does. The law of gravity has always been there. We only recognize the meaning in what we observe. That is a conscious process.

b), instead, can be used against ID, but only if you can demonstrate that the search space is really filled with targets which can be useful to explain what we see. IOWs, neo darwinists must demonstrate one of two things:

b1) That the search space is filled with long and complex functional molecules like ATP synthase. I am not holding my breath.

b2) That the search space is filled with simpler functional molecules which can be expanded by positive NS, so that they can serve as "bridges" to longer and more complex functional molecules like ATP synthase, and that the path to ATP synthase and all the other long and complex functional molecules can be deconstructed, as a general rule, into those naturally selectable steps, each of them in the range of a reasonable RV system. OK, demonstrate that.

So, my point is that b (the TS error) is a valid argument, but it is simply wrong: the search space is not filled with complex molecules, and complex molecules cannot be deconstructed into simpler, naturally selectable ones. Not as a rule, and not even as a single example. However, if that is your line, we can discuss. But a) is simply a wrong argument. It is completely false. gpuccio
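For readers who want to see the arithmetic behind the "at least 1600 bits" figure mentioned above, here is a minimal sketch of the kind of count gpuccio describes (conserved positions times log2(20)). The 370-residue figure is an assumption chosen only to show the order of magnitude, not gpuccio's actual alignment count, and the independence assumption is exactly the simplification being debated in this thread.

```python
# A minimal sketch (illustration only) of the order-of-magnitude count described
# above: treat each strictly conserved alignment position as contributing log2(20)
# bits, and ignore correlations between positions.
import math

def conserved_bits(n_conserved, alphabet_size=20):
    """Bits implied by strictly conserved positions, assuming independence."""
    return n_conserved * math.log2(alphabet_size)

# ~370 strictly conserved residues is an assumed figure, chosen only because it
# reproduces the "at least 1600 bits" order of magnitude cited for ATP synthase.
print(round(conserved_bits(370)))  # -> 1599
```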
kf, Sadly, you provided no answers to my questions, viz: Are you quite comfortable with Durston's assumption that the exploration of insulin's aa sequence has been a random walk, without any intervention? Every single one of your calculations of p(T|H) and related alphabet soup relies on the assumption of independence (as you have admitted), and that assumption is false (as both you and Durston have admitted). You assert that the error is "not material". How big is the error? How do you know? Your post 497 was relatively concise, but non-responsive. Please be as precise and concise as you can while answering these questions. DNA_Jock
What do they do for insulin in parts of the world that have probs with pigs?
Recent advances in genetic engineering have led to the sale of recombinant human insulin. Since 1982, at least. DNA_Jock
PS: What do they do for insulin in parts of the world that have probs with pigs? kairosfocus
D-J: OOL is the heart of the matter; you were implying OOL was a one-shot event. Sims that are reasonable do explore a stochastic pattern, but the problem is that too many evo computing or genetic algorithm cases end up bringing in intelligently directed configuration by the back door, or substitute hill climbing within a well-behaved island of function for the real task of finding such islands. A grossly simplified, off-track sim taken on a "Pascal washes whiter" basis or the like is worse than no sim.

I further suggest that Durston's summary shows the sort of range that is reasonable, compatible with retaining function on something important. Though of course pig insulin does not do quite the same as human. The fundamental thing is, we do not have a situation where, for the typical 300 AA protein, say, any 17 or so of the 20 AAs would do for any position. The constraints observed consistent with retained function are much tighter than that.

As for the game on probabilities, the answer has long been: take the Dembski 2005 calc one step forward to see what it means. Immediately, it is an info-beyond-a-threshold metric, and we have independent ways to get info values and to estimate thresholds of feasible complexity for any reasonable blind chance process. Where a search of a set of possibilities W is a selection from the set of subsets of W, of cardinality 2^|W|. That is, the blind search for a golden search is strongly likely to be much worse than any reasonable blind chance search of the first-order config space W, in which the islands of function T1 . . . Tn sit. Where, per atomic resources, the search cannot but be maximally sparse -- a needle-in-haystack challenge.

As for assumptions of independence, you are not thinking physically. Any AA or any base can succeed any other; there is no mechanical constraint. For DNA, the sequence used is tied through a translation process to fold-collaborate-function requisites that are remote, and come much after the fact. A change that happens somehow in a protein is not back-translated into a new DNA sequence; 2 bits per base is reasonable. Moving to looking at coded patterns is like looking at frequency patterns in English text and suggesting that that somehow constrains the bit positions in a memory register. The more complex metrics assume we are already within the genetic code and already looking at a family of related proteins across the world of life. While that has its place and is perhaps more palatable to Darwinists (though even that is doubtful), we must not get the causal-chain cart before the horse. And that is already leaving aside the Darwin's pond context of energetically uphill rxns, chirality and cross-interference. And there is more.

I need to get back to the local issues already in progress and hotting up today with an announcement of Speaker. Then, there was the speech on history and the value of cultural, historical and natural heritage, yesterday afternoon, and its aftermath. KF kairosfocus
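As a quick numeric illustration of the raw sizes mentioned above (the "typical 300 AA protein" and the 2-bits-per-base figure), here is a back-of-the-envelope sketch; it deliberately uses the simple independence-assuming count that is itself under dispute in this thread.

```python
# Raw sequence-space sizes for a 300-AA protein counted at log2(20) bits per
# position, and for its coding DNA counted at 2 bits per base.
import math

aa_positions = 300
protein_bits = aa_positions * math.log2(20)   # size of the unconstrained AA space
dna_bits = aa_positions * 3 * 2               # 3 bases per codon, 2 bits per base

print(round(protein_bits))   # -> 1297 bits, i.e. a space of 20^300 (about 10^390)
print(dna_bits)              # -> 1800 bits for the 900-base coding sequence
```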
kf, We were not talking about OOL. As Pa Grape said “Why even bring it up?” You state:
When it comes to Monte Carlo runs, it seems to me that we do not invalidate such because the sims are programmed and set up then run by presumably intelligent programmers, once the dynamics are reasonable and appropriate randomness [or often pseudo- . . . ] is injected.
Personally, I accept that well-designed Monte Carlos are useful for modeling unintelligent processes. However, many IDists (including, to my disappointment, gpuccio) reject simulations for precisely the reasons you state. You should let them know they are wrong. On the other hand, if the programmer steps in during the run and replaces certain of the pseudo-random numbers with numbers designed to achieve a specific goal, we might question the validity of the analysis. Especially if n=1. We might say "Get your b|**&/y thumb off the scale!" Regarding the fact that n=1, you state:
“there have been many types of organisms and a lot more individuals, allowing chance driven random walks around the AA-sequence space”
So you are quite comfortable with Durston's assumption that the exploration of insulin's aa sequence has been a random walk, without any intervention? Careful, it's a trap. But nowhere, nowhere do you even attempt to address the fact that every single one of your calculations of p(T|H) and related alphabet soup relies on the assumption of independence (as you have admitted) and that this assumption is false (as both you and Durston have admitted). You assert that the error is "not material". How big is the error? How do you know? Please be as precise and concise as you can. DNA_Jock
To further expand on the limited utility of Durston's results: he is taking the observed variation in the aa sequences of extant, optimized sequences and, assuming neutral variation, he then estimates the degree of substitution allowed. Thus what he measures is the degree of substitution allowed without any selectable degradation of function. This is analogous to Gibbs sampling. However, any sequences that have a slight degradation of function will be under-represented in his sample. Any sequence with a moderate degradation of function will be vanishingly unlikely to appear in his dataset. Hence the disconnect between his results and the more systematic approach of McLaughlin 2012 (PMID:23041932). Both Durston and McLaughlin suffer from the shortcoming that they are exploring functional constraint around an optimum. Therefore the approach of Hazen et al. 2007 (PMID:17494745) is preferable. Is this paper the "wrong Szostak paper" to which you refer? There are 220 Szostak papers in PubMed... You are going to have to explain to me why you think it is "wrong". And "not liking the results" is not a valid reason. Cognitive bias, indeed!

Finding binaries on an alien planet: I like your hypothetical; it made me think. Before I address the Texas SS aspect, a word of caution re the design inference aspect: if I found pi and e in binary I would be drawn to conclude design because, as you so rightly put it, "with all [y]our knowledge of physical laws in the universe, there is no natural process which can explain those specific sequences". If OTOH the binaries represented the Fibonacci sequence, I would continue looking for natural processes to explain the sequence. Thus my conclusion of design for pi and e may just be a failure of my imagination or the result of my ignorance. Wouldn't be the first time.

For the following, I would like you to imagine that, like the Fibonacci sequence, there is some natural process that can produce square roots of smallish numbers in binary. I agree with your point that pi and e exist outside of any specification we might draw up; they existed before we saw these marks on the wall, so pi in binary and e in binary avoid the Texas SS issue. Unfortunately for your argument, there is a flaw in your analogy. Imagine that instead we had found (to 10^6 bits apiece) the square root of 3,001 and the square root of 500,001. This is what we are doing when we specify a functionality for a protein that we have already found. The specification "ATP synthase" did not exist before we found ATP synthase, in the way that pi existed. The specification "ATP synthase" is like the specification "the square root of 3,001 in binary", and the specification "APP synthase" is like "the square root of 500,001 in binary". The only reason we are not discussing the latter is because we haven't seen it yet. DNA_Jock
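Since the Hazen et al. (2007) measure comes up in the comment above, a minimal sketch of its core formula may help readers follow the contrast being drawn: functional information is I(Ex) = -log2 F(Ex), where F(Ex) is the fraction of sequences whose activity meets a threshold Ex. The activity values below are invented purely for illustration.

```python
# Sketch of the Hazen-style functional-information measure: -log2 of the fraction
# of assayed sequences meeting an activity threshold.
import math

def functional_information(activities, threshold):
    """-log2 of the fraction of assayed sequences meeting the activity threshold."""
    n_functional = sum(1 for a in activities if a >= threshold)
    if n_functional == 0:
        raise ValueError("no sequence meets the threshold; F(Ex) cannot be estimated")
    return -math.log2(n_functional / len(activities))

# toy example: 3 of 1,000,000 assayed sequences exceed the threshold
activities = [0.0] * 999_997 + [1.2, 3.4, 5.6]
print(round(functional_information(activities, threshold=1.0), 1))  # -> 18.3 bits
```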
DNA_Jock: About Durston again, obviously there are assumptions in his method. I don't agree with you that they are "terrible, completely unsupported". I find them very reasonable. The point is, he is measuring something. He measures differences in the functional density between proteins. So, while his values are obviously related to the total protein length, as shown in the regression, the variance in functional density is reasonably explained by different functional constraints for different functions. So, his results tell us two very important things: that functional complexity always increases with the length of the sequences, but that it also depends critically on the type of function. I am confident that the computation of functional complexity in proteins will become even more precise and reliable. But Durston has given us a very good approach. I have given one important reason to think that Durston's method really underestimates functional complexity, here: https://uncommondesc.wpengine.com/junk-dna/junk-dna-only-20-all-but-8-2/ Posts #5, 11 and 12. gpuccio
DNA_Jock: Just to understand your position about the TS fallacy, I would like to ask you for an explicit answer to the following hypothetical, extreme scenario: You land on a new planet, of which you know very little. Apparently, there are no living inhabitants. On the mountain walls, you can observe two kinds of marks, sometimes arranged linearly. Both marks can be explained by some environmental process on the planet, and even their linear arrangement can be explained rather easily. In general, the sequences of the two marks appear aspecific, as you would expect. But you arrive at a particular wall, where there are only two long linear sequences of marks. You are a good mathematician, and after a while something in the two sequences disturbs you a little. After some further reflection, you notice that the two marks, if interpreted as binary symbols, correspond in the first sequence to the first 10^6 bits of pi, and in the second sequence to the first 10^6 bits of e. After long consideration, you conclude that with all your knowledge of physical laws in the universe, there is no natural process which can explain those specific sequences. They could be random outcomes, like all the other similar, apparently random sequences you observed on the planet. Or they could be designed by some alien visitor or old inhabitant, of which you have no other trace. So, here is my question, and please answer it explicitly: Is there a problem of design inference, or is your recognition of the correspondence of the two sequences to two important mathematical constants only an example of the Texas Sharpshooter fallacy? Just to understand your position. gpuccio
kairosfocus:
LH: I gave an outline of an absolutely routine way to measure functionally specific info, with a guide as to how it would address the flagellum.
KF, Learned Hand asked for an "explicit calculation", not an "outline". You say the calculation is "absolutely routine". Then perform it! Surely you are competent enough to perform an "absolutely routine" calculation, aren't you? Show us your explicit and "absolutely routine" calculation of how much "functionally specific info" is contained in the bacterial flagellum. P.S. Dembski's CSI argument is circular, as I explained above. If you disagree, you need to show where my argument fails, rather than tossing out distractive red herring talking points designed to polarise and confuse the atmosphere. Please do better. keith s
DNA_Jock: You say: "I should expand on why Durston’s assumption is invalid: He is ignoring the effect of purifying selection." I will comment on your brief comments about Durston later, but for the moment I would like to understand what you mean here. Negative selection (purifying selection) is exactly the reason why function constrains sequence. IOWs, we observe different possible functional sequences with the same function because neutral selection allows them, while negative selection eliminates all the rest of variation. That's how proteins traverse their functional space. In what sense is that an argument against Durston? Please, explain. gpuccio
DNA_Jock: "This IS the Texas Sharpshooter fallacy. He characterizes the activity, THEN writes the specification. By way of illustration, you have specified ATP synthase. You have never specified adenosine pentaphosphate synthase. Why? Because you have never observed it. You and your biochemist are saying “Look at this protein; how unlikely is that?” It’s a post-hoc specification." I that all you can say? Have you understood my point? The specification is made post-hoc, in the sense that the description of the function is given after we observe it, but the function is not post-hoc: the function exists independently. I think that you are strangely mixing two different problems. One is that the definition of the function of a protein is done from the observation of the protein. As I have said, this is post-hoc only in a chronological sense, not in a logic sense: we are not imagining the functionality because we see it. We realize that the functionality exists because we see it. There is an absolute objectivity in the ability of an enzyme to accelerate a reaction, like there is an absolute objectivity in the ability of a text to covey meaning. These things are not "painted" post hoc. So, your interpretation of the need to explain them as a fallacy is really a fallacy. The second aspect is what I call "the problem of all possible functions". That is the problem I have discussed in my post which I had referred you to. It is also the main line of "defense" which is used by neo darwinists to criticize ID. Very simply, it is the attempt to show that the functional space is so filled with function that it is extremely easy to find function by RV. That is the purpose of the wrong Szostak paper. That is the purpose of the few similar papers which try, without succeeding, to make the point, and that is probably the purpose of the Wagner "arguments". Nothing of those attempts, as far as I know, even goes near to starting to show what it tries to show. That the functional space is not filled with function in general is well shown by the simple fact that a long enough string of text with good meaning in English can invariably be distinguished by any random or algorithmic string of text by dFSCI. This is a fact, and I have many times challenged anyone to give any counterexample. How do you explain that? Isn't that the Texas Sharpshooter fallacy? Isn't the English meaning of a text, according to your argument, only a target painted around the string? How can you use such wrong arguments only to deny the value of functional information? This is a very serious fallacy: this is cognitive bias of the worst species. My explicit point in defining dFSCI is that any partition which generates a target space whose probability is extremely small and which cannot be explained by necessity can be empirically used to infer design with 100% specificity. This is empirically true. How could it be empirically true, if it were only a false idea due to a logic fallacy? The argument that the functional space of proteins, in particular, could be so connected that a simple algorithm like NS can explain it has nothing to do with the Sharpshooter fallacy: it is an attempt to explain functional information algorithmically. That algorithmic explanation must be supported by facts to be accepted as credible. Her is what I have written about tha "any possible function" reasoning.
I usually call this objection the "any possible function" argument. In brief, it says that it is wrong to compute the probability of a specific function (which is what dFSCI does, because dFSCI is specific for a defined function), when a lot of other functional genes could arise. IOWs, the true subset of which we should compute the probability is the subset of all functional genes, which is much more difficult to define. You add the further argument that the same gene can have many functions. That would complicate the computation even more, because, as I have said many times, dFSCI is computed for a specific function, explicitly defined, and not for all the possible functions of the observed object (the gene). I don't agree that these objections, however reasonable, are relevant. For many reasons, that I will try to explain here.

a) First of all, we must remember that the concept of dFSCI, before we apply it to biology, comes out as a tool to detect human design. Well, as I have tried to explain, dFSCI is defined for a specific function, not for all possible functions, and not for the object. IOWs, it is the complexity linked to the explicitly defined function. And yet, it can detect human design with 100% specificity. So, when we apply it to the biological context, we can reasonably expect a similar behaviour and specificity. This is the empirical observation. But why does that happen? Why doesn't dFSCI fail miserably in detecting human design? Why doesn't it give a lot of false positives, if the existence of so many possible functions in general, and of so many possible functions for the same object, should be considered a potential hindrance to its specificity?

The explanation is simple, and it is similar to the reason why the second law of thermodynamics works. The simple fact is, if the ratio between specified states and non-specified states is really low, no specified state will ever be observed. Indeed, no ordered state is ever observed in the molecules of a gas, even if there are potentially a lot of ordered states. The subset of ordered states is, however, trivial if compared to the subset of non-ordered states. That's exactly the reason why dFSCI, if we use an appropriate threshold of complexity, can detect human design with 100% specificity. Functionally specified states are simply too rare, if the total search space is big enough.

I will give an example with language. If we take one of Shakespeare's sonnets, we are absolutely confident that it was designed, even if after all it is not a very long composition, and even if we don't make the necessary computations of its dFSCI. And yet, we could reason that there are a lot of sequences of characters of the same length which have meaning in English, and would be specified just the same. And we could reason that there are certainly a lot of other sequences of characters of the same length which have meaning in other known languages. And certainly a lot of sequences of characters of the same length which have meaning in possible languages that we don't know. And that the same sequence, in principle, could have different meanings in other unknown languages, on other planets, and so on. Does any of those reasonings lower our empirical certainty that the sonnet was designed? Not at all. Why? Because it is simply too unlikely that such a specific sequence of characters, with such a specific, and beautiful, meaning in English, could arise in a random system, even if given a lot of probabilistic resources.
And how big is the search space here? My favourite one, n. 76, is 582 characters long, including spaces. Considering an alphabet of about 30 characters, the search space, if I am not wrong, should be about 2800 bits. And this is the search space, not the dFSCI. If we define the function as "any sequence which has good meaning in English", the dFSCI is certainly much lower. As I have argued, the minimal dFSCI of the ATP synthase alpha+beta subunit is about 1600 bits. Its search space is about 4500 bits, much higher than the Shakespeare sonnet's search space. So, why should we doubt that the ATP synthase alpha+beta subunit was designed? For lack of time, I will discuss the other reasons against this argument, and the other arguments, in the following posts. By the way, here is Shakespeare's sonnet n. 76, for the enjoyment of all!

Why is my verse so barren of new pride,
So far from variation or quick change?
Why with the time do I not glance aside
To new-found methods, and to compounds strange?
Why write I still all one, ever the same,
And keep invention in a noted weed,
That every word doth almost tell my name,
Showing their birth, and where they did proceed?
O! know sweet love I always write of you,
And you and love are still my argument;
So all my best is dressing old words new,
Spending again what is already spent:
For as the sun is daily new and old,
So is my love still telling what is told.
And:
Some further thoughts on the argument of "any possible function", continuing from my previous post.

b) Another big problem is that the "any possible function" argument is not really true. Even if we want to reason in that sense (which, as explained in my point a, is not really warranted), we should at most consider "any possible function which is really useful in the specific context in which it arises". And the important point is, the more complex a context is, the more difficult it is to integrate a new function into it, unless it is very complex. In a sense, for example, it is very unlikely that a single protein, even if it has a basic biochemical function, may be really useful in a biological context unless it is integrated into what already exists. That integration usually requires a lot of additional information: transcriptional, post-transcriptional and post-translational regulation, transport and localization in the correct cellular context and, usually, coordination with other proteins or structures. IOWs, in most cases we would have an additional problem of irreducible complexity, which should be added to the basic complexity of the molecule. Moreover, in a being which is already efficient (think of prokaryotes, practically the most efficient reproducers in the whole history of our planet), it is not likely at all that a single new biochemical function can really help the cell. That brings us to the following point:

c) Even the subset of useful new functions in the context is probably still too big. Indeed, as we will discuss better later, if the neo darwinian model were true, the only functions which are truly useful would be those which can confer a detectable reproductive advantage. IOWs, those which are "visible" to NS. Even if we do not consider, for the moment, the hypothetical role of naturally selectable intermediates (we will do that later), still a new single functional protein which is useful, but does not confer a detectable reproductive advantage, would very likely be lost, because it could not be expanded by positive selection (be fixed in the population) nor be conserved by negative selection. So, even if we reason about "any possible function", that should become "any possible function which can be so useful in the specific cellular context in which it arises that it can confer a detectable, naturally selectable reproductive advantage". That is certainly a much smaller subset than "any possible function". Are you sure that 2^50 is still a reasonable guess? After all, we have got only about 2000 basic protein superfamilies in the course of natural history. Do you think that we have only "scratched the surface" of the space of possible useful protein configurations in our biological context? And how do you explain that about half of those superfamilies were already present in LUCA, and that the rate of appearance of new superfamilies has definitely slowed down with time?

d) Finally, your observation about the "many different ways that a gene might perform any of these functions". You give the example of different types of flagella. But flagella are complex structures made of many different parts, and again a very strong problem of irreducible complexity applies. Moreover, as I have said, I have never tried to compute dFSCI for such complex structures (OK, I have given the example of the alpha-beta part of ATP synthase, but that is really a single structure that is part of a single multi-chain protein).
That’s the reason why I compute dFSCI preferably for single proteins, with a clear biochemical function. If an enzyme is conserved, we can assume that the specific sequence is necessary for the enzymatic reaction, and not for other things. And, in general, that biochemical reaction will be performed only by that structure in the proteome (with some exceptions). The synthesis of ATP from a proton gradient is accomplished by ATP synthase. That is very different from saying, for example, that flight can be accomplished by many different types of wings.
And:
My aim is not to say that all proteins are designed. My aim is to make a design inference for some (indeed, many) proteins. I have already said that I consider differentiation of individual proteins inside a superfamily/family as a "borderline" issue. It has no priority. The priority is, definitely, to explain how new sequences emerge. That's why I consider superfamilies. Proteins from different superfamilies are completely unrelated at the sequence level. Therefore, your argument is indeed in favor of my reasoning. As I have said many times, assuming a uniform distribution is reasonable, but is indeed optimistic in favor of the neo darwinian model. There is no doubt that related or partially related states have a higher probability of being reached in a random walk. Therefore, their probability is higher than 1/N. That also means, obviously, that the probability of reaching an unrelated state is certainly lower than 1/N, which is the probability of each state in a uniform distribution. For considerations similar to some that I have already made (the number of related states is certainly much smaller than the number of unrelated states), I don't believe that the difference is significant. However, 1/N is an upper bound for the probability of reaching an unrelated state, which is what the dFSCI of a protein family or superfamily is measuring.
And:
dFSCI is a tool which works perfectly even if it is defined for a specific function. The number of really useful functions, that can be naturally selected in a specific cellular context, is certainly small enough that it can be overlooked. Indeed, as we are speaking of logarithmic values, even if we considered the only empirical number that we have (the 2000 protein superfamilies that have a definite role in all biological life as we know it today), that is only 11 bits. How can you think that it matters, when we are computing dFSCI in the order of 150 to thousands of bits?

Moreover, even if we consider the probability of finding one of the 2000 superfamilies in one attempt, the mean functional complexity in the 35 families studied by Durston is 543 bits. How do you think that 11 bits more or less would count? And there is another important point which is often overlooked. 543 bits (mean complexity) means that we have a probability of 1 in 2^543 of finding one superfamily in one attempt, which is already well beyond my cutoff of 150 bits, and also beyond Dembski's UPB of 520 bits. But the problem is, biological beings have not found one protein superfamily once. They have found 2000 independent protein superfamilies, each with a mean probability of being found of 1 in 2^543. Do you want to use the binomial distribution to compute the probability of having 2000 successes of that kind?

Now, some of the simplest families could have been found, perhaps. The lowest value of complexity in Durston's table is 46 bits (about 10 AAs). It is below my threshold of 150 bits, so I would not infer design for that family (Ankyrin). However, 10 AAs are certainly above the empirical thresholds suggested by Behe and Axe, from different considerations. But what about Paramyx RNA Polymerase (1886 bits), or Flu PB2 (2416 bits), or Usher (1296 bits)? If your reasoning about "aggregating" all useful functional proteins worked, we should at most find a few examples of the simplest ones, which are much more likely to be found, and not hundreds of complex ones, which is what we observe.
So, I will sum up my arguments with a simple question to you: How is it that the perfect ability to distinguish a piece of text in good English, 600 characters long, from any randomly generated string of characters is empirically valid? Why, for example, does the common objection made here many times (that a random system could generate strings in any language, not only in English, or even encrypted strings) not make our ability to recognize English strings among random strings any less valid? gpuccio
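A quick numeric check of the figures quoted in the comment above (the sonnet's search space, the ~11 bits for 2000 superfamilies, and the joint probability of 2000 independent 543-bit "hits"), simply taking those numbers at face value:

```python
# Back-of-the-envelope checks of the stated figures: 582 characters over a
# ~30-symbol alphabet, 2000 superfamilies, and a mean of 543 bits per superfamily
# treated (optimistically) as independent events.
import math

sonnet_space_bits = 582 * math.log2(30)
print(round(sonnet_space_bits))        # -> 2856, consistent with the "about 2800 bits"

print(round(math.log2(2000), 1))       # -> 11.0 bits for picking among 2000 targets

# 2000 independent "hits", each with probability 2^-543: log2 of the joint probability
joint_log2_p = 2000 * (-543)
print(joint_log2_p)                    # -> -1086000 bits, roughly 10^-326900
```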
KS: Kindly cf the just above, as a FYI. KF kairosfocus
LH: I gave an outline of an absolutely routine way to measure functionally specific info, with a guide as to how it would address the flagellum. There is nothing mysterious or difficult there, providing you have basic familiarity with how info is routinely measured and reported in, say, file sizes, preferably in bits. I see no good reason to make an imaginary hyperskeptical mountain out of a molehill, when a world of technology out there routinely does what I said; just look at a folder window on your PC, in details mode, if you doubt me -- it's that commonplace. There is no in-principle difference between ASCII strings, binary digit -- bit -- strings, and R/DNA strings, with protein AA strings just expressing the coded-in functionality implicit in the R/DNA strings. If you need a 101, kindly go here, in context, in my always linked, to see the basic reasoning behind info measurement. If you want the basic logic behind a metric of FSCO/I as a beyond-threshold concept that effectively gives the per-aspect explanatory filter in an equation, try here. Of course that simple expression: Chi_500 = I*S - 500, functionally specific bits beyond the solar system threshold . . . is rather like the notorious summary table in a report document: a small amount of table can take a lot of work on the ground to properly fill in. KF

PS: And, KS, that metric is not circular. As for Dembski's metric MODEL of CSI, kindly note that specified complexity was first observed and stated on the record by Orgel in 1973, 32 years before Dembski developed his model. Which turns out to be an info-beyond-a-threshold metric; the above expression gives a boil-'er-down form. CSI is observable and objectively recognisable as a target zone that is separately, independently specifiable and deeply isolated in a config space, to 1 in 10^150 or a similar scale. Once such has been specified, it is maximally unlikely that blind explorations traceable to chance and mechanical necessity will find it. But, especially in the relevant case where specification is based on observable function, there are trillions of cases in point that show the reliable pattern that FSCO/I and wider CSI come about by design. Thus such is a reliable sign of design. Per inductive inference to the best, observationally anchored explanation, backed up by needle-in-haystack search challenge analysis. Induction on trillions of cases in point with no clear counter-instances is NOT question begging. Which should be patent, save to the selectively hyperskeptical. kairosfocus
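For readers who want to see the Chi_500 expression above as something executable, here is a minimal sketch. Reading S as a 0/1 specificity flag and I as information content in bits is an interpretation of the summary formula as quoted here, offered for illustration only, not as kairosfocus's own definition.

```python
# Sketch of Chi_500 = I*S - 500: functionally specific bits beyond a 500-bit threshold.
def chi_500(info_bits, is_functionally_specific):
    """Positive values indicate info beyond the 500-bit solar-system threshold."""
    s = 1 if is_functionally_specific else 0
    return info_bits * s - 500

print(chi_500(1600, True))    # -> 1100  (beyond the threshold)
print(chi_500(1600, False))   # -> -500  (no specification, no design inference)
print(chi_500(250, True))     # -> -250  (specified but not complex enough)
```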
D-J: n = 1 is a strawman, as you know. What you substituted is: there is one world of life, when what is relevant is that for protein families there have been many types of organisms and a lot more individuals, allowing chance driven random walks around the AA-sequence space. Relevant proteins have some variability but not indefinite plasticity. That is, we do see island-of-function patterns.

When it comes to Monte Carlo runs, it seems to me that we do not invalidate such because the sims are programmed and set up then run by presumably intelligent programmers, once the dynamics are reasonable and appropriate randomness [or often pseudo- . . . ] is injected. Such are capable of exploring a space of stochastic possibilities and displaying the pattern of outcomes likely enough to show up in a set of runs. Which is the point. Utterly remote possibilities do not usually show up in such searches of a space of possibilities. Which is the further point.

Next, the relevant issue within living forms is the origin of novel body plans by blind chance and/or mechanical necessity from some ancestral form, credibly involving -- you can calc on the back of an envelope or look at genome sizes etc. -- 10 to 100+ mn new bases. The possibility space involved, multiplied by the known isolation of clusters of proteins, further multiplied by the challenge that getting new cell types, tissues, organs and integrated systems requires creating multiple, well-matched, correctly arranged parts that interact to achieve relevant config-specific function, points to a sharp limit on the ability to explore the relevant spaces in ways that would make blind discovery of novel islands of function plausible. On either solar system or observed cosmos scope resources. That is the context for the simple, easily verified pattern: functionally specific, complex organisation and associated information [FSCO/I for short] is observable, and we have seen trillions of cases arise. In every observed case, reliably, the cause involves intelligently directed configuration, aka design.

Next, you managed to point to a key aspect of the islands-of-function point without recognising it: purifying selection is the selective removal of alleles [= one of a number of alternative forms of the same gene or same genetic locus . . . ] that are deleterious. This can result in stabilizing selection through the purging of deleterious variations that arise. Yes, some mutations are directly lethal from early embryological stages, others later on. Yet others result in inability to compete with normal forms and -- save for the sort of artificial intervention seen with so-called fancy goldfish -- would die out, stabilising the general population pattern (Blythe's emphasis on what so-called natural selection would do). Under certain other circumstances -- I have in mind caves in Mexico -- normal function in the form of eyes is disadvantageous, and loss of eyes in fish resulted. Note, loss of function. In every one of these and other cases, natural selection serves as a subtracter of information, a culler, not a creative adder. That addition comes from somewhere else, per evo mat assumptions: from a non-foresighted, blind, happenstance process, aka chance variation. By any number of possible mechanisms. Thus we run right into the search limitations of such blind processes. But then we do need to go back to the n = 1 issue.
On evo mat abiogenesis models, we are looking at Darwin's warm ponds, comet bodies, oceans, gas giant moons and the like, across the Sol system and the wider cosmos. We have known thermodynamic forces, known chemistry, known multitudes of venues, and known characteristic state-change times for atomic-level processes of about 10^-13 to 10^-15 s. That does not boil down to n = 1 for sim runs. Nor does it substantiate the question-begging assertion or assumption that there was a lucky breakthrough of blind forces that did create gated, encapsulated, code-using, metabolising, von Neumann self-replicator-using, cell-based life. Just the opposite. Forces of diffusion and breakdown of energetically uphill molecules and competing cross-reactions alone point strongly against abiogenesis. That's why Orgel and Shapiro had the following sharp exchange some years ago, on the utter implausibility of metabolism-first and genes-first models -- in a context where the common factor is the origin of requisite FSCO/I -- that resulted in mutual ruin:
[[Shapiro:] RNA's building blocks, nucleotides, contain a sugar, a phosphate and one of four nitrogen-containing bases as sub-subunits. Thus, each RNA nucleotide contains 9 or 10 carbon atoms, numerous nitrogen and oxygen atoms and the phosphate group, all connected in a precise three-dimensional pattern . . . . [[S]ome writers have presumed that all of life's building blocks could be formed with ease in Miller-type experiments and were present in meteorites and other extraterrestrial bodies. This is not the case. A careful examination of the results of the analysis of several meteorites led the scientists who conducted the work to a different conclusion: inanimate nature has a bias toward the formation of molecules made of fewer rather than greater numbers of carbon atoms, and thus shows no partiality in favor of creating the building blocks of our kind of life . . . . To rescue the RNA-first concept from this otherwise lethal defect, its advocates have created a discipline called prebiotic synthesis. They have attempted to show that RNA and its components can be prepared in their laboratories in a sequence of carefully controlled reactions, normally carried out in water at temperatures observed on Earth . . . . Unfortunately, neither chemists nor laboratories were present on the early Earth to produce RNA . . . [[Orgel:] If complex cycles analogous to metabolic cycles could have operated on the primitive Earth, before the appearance of enzymes or other informational polymers, many of the obstacles to the construction of a plausible scenario for the origin of life would disappear . . . . It must be recognized that assessment of the feasibility of any particular proposed prebiotic cycle must depend on arguments about chemical plausibility, rather than on a decision about logical possibility . . . few would believe that any assembly of minerals on the primitive Earth is likely to have promoted these syntheses in significant yield . . . . Why should one believe that an ensemble of minerals that are capable of catalyzing each of the many steps of [[for instance] the reverse citric acid cycle was present anywhere on the primitive Earth [[8], or that the cycle mysteriously organized itself topographically on a metal sulfide surface [[6]? . . . Theories of the origin of life based on metabolic cycles cannot be justified by the inadequacy of competing theories: they must stand on their own . . . . The prebiotic syntheses that have been investigated experimentally almost always lead to the formation of complex mixtures. Proposed polymer replication schemes are unlikely to succeed except with reasonably pure input monomers. No solution of the origin-of-life problem will be possible until the gap between the two kinds of chemistry is closed. Simplification of product mixtures through the self-organization of organic reaction sequences, whether cyclic or not, would help enormously, as would the discovery of very simple replicating polymers. However, solutions offered by supporters of geneticist or metabolist scenarios that are dependent on “if pigs could fly” hypothetical chemistry are unlikely to help.
In short, there is no justification for n = 1. From the plausibility perspective, we have no good (non-ideological, non-question-begging) reason to hold that blind watchmaker abiogenesis occurred even once. From the opportunity perspective, we have had endless numbers of opportunities across an observable cosmos, so the number of trial runs is very large.

Of course, to make my search-the-config-space point I have set up a toy case of 10^57 sol system atoms, each observing a tray of 500 coins flipped and examined every 10^-14 s. That is the extreme for our sol system, and it parallels a similar case with 10^80 atomic observers for the observed cosmos. The result is, the degree of exploration of even such a toy space of possibilities using up sol system resources is so tiny that we have no good reason to expect blind discovery of anything but the bulk of the possibilities, near 50:50 H-T, in no particular order. The point of the sim parable is obvious, save to those committed not to see it. And in that case, 10^57 observers running for 10^17 s is not exactly n = 1.

The overall point is plain. There is no good reason to believe that either OOL or the origin of major body plans occurred by blind watchmaker thesis type mechanisms. And, every reason to see that known stochastic processes would cause exploration of AA space, culled for function, among protein families. But of course survival of a family of proteins across life forms, or exploration of the resulting islands of function across time, is not the real issue; the real issue is blind watchmaker arrival at the many molecular-level islands implicit in viable body plans. Dozens of them.

But, in the end, the bottom line for UD is simple: for two years running, there has been an open invitation challenge to provide an essay in support of the blind watchmaker type thesis for the tree of life, from root to twigs. A year past, I had to cobble together a composite answer that was most unsatisfactory, and since then there has been no interest in making the case. I see plenty of interest in attacking and discrediting design thought, and even outright enthusiasm to attack or ridicule design supporters, but very little sign of interest in actually making the blind watchmaker case. When, in fact, a readable 6,000-word or so essay that can use multimedia, infographics, onward links etc. to heart's content would be immediately devastating to design theory. Methinks the dog that refused to bark is the most telling argument of all on the real balance of the matter on the merits. If you doubt me, simply go here. I'se be waiting for the woof-woof . . . KF kairosfocus
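The 500-coin toy case above reduces to simple arithmetic; here is a sketch that just multiplies out the stated numbers (10^57 atomic observers, one inspection per 10^-14 s, 10^17 s) and compares the result with the 2^500 configurations of the tray.

```python
# Multiplying out the stated toy-case numbers and comparing with the 500-bit space.
from math import log10

observers = 10**57
inspections_per_second = 10**14
seconds = 10**17
total_observations = observers * inspections_per_second * seconds   # 10^88

config_space = 2**500                                                # ~3.3e150

fraction_sampled = total_observations / config_space
print(f"total observations: 10^{log10(total_observations):.0f}")                 # 10^88
print(f"fraction of the 500-bit space sampled: ~10^{log10(fraction_sampled):.0f}")
# roughly 10^-63, i.e. a maximally sparse sampling of the configuration space
```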
Mung: Then perhaps wd400 can explain just how magical and mysterious and yes, miraculous, it is that such diversity of life can come about through exploration of only a tiny and closely related area of the space of all possible genomes. We're waiting. Mung
keiths:
The most laughable thing about Dembski’s CSI is that even if it were measurable, it would still be useless.
lol. pathetic. really pathetic. laughable. poor keiths. Mung
DNA_Jock:
Obviously, this problem is made worse by people who start new threads because they feel like it, or who post supposedly seminal, comments-closed pontifications.
:-) keith s
I should expand on why Durston's assumption is invalid: He is ignoring the effect of purifying selection. DNA_Jock
Silver Asiatic,
You appear to be quoting from Dembski. Could you refer me to the pages that contain that summary?
It's on p. 18 of Specification: The Pattern That Signifies Intelligence. keith s
Gpuccio:
For example, a biochemist can study a system and find some new enzymatic activity. Let’s say that he isolates the protein, and verifies that it is really responsible for the enzymatic activity. So he defines the activity, and says that the protein is functional, and can do that particular thing.
This IS the Texas Sharpshooter fallacy. He characterizes the activity, THEN writes the specification. By way of illustration, you have specified ATP synthase. You have never specified adenosine pentaphosphate synthase. Why? Because you have never observed it. You and your biochemist are saying “Look at this protein; how unlikely is that?” It’s a post-hoc specification.
I don’t think so. The main assumption in Durston’s method is that the functional space has been mostly traversed during evolution by neutral variation. IOWs, that the variety of sequences we observe for a function is a reliable sample of the target space.
A representative sample of the entire target space? That is a terrible, completely unsupported assumption. Also note that it isn’t relevant to kairosfocus’s problem, which is that “each site in an amino acid protein sequence is assumed to be independent”, of which Durston himself says “In reality, we know that this is not the case”. If you are going to make an approximation, you have to be able to support the claim that it is fit-for-purpose.
Other methods will be developed, as our understanding of the functional space of proteins improves. The point is: functional complexity is a true and important dimension, it can be analyzed, and it is extremely relevant for the problem of the origin of biological information. Denying this is denying science itself.
On this we agree. But color me underwhelmed with the efforts to date of Durston and Axe etc. DNA_Jock
DNA_Jock: "Finally, I don’t see the probabilities that arise as being of much practical use, since Durston analyzed the observed sequence variation in extant, optimized sequences. This tells you very, very little about the size of the target space, and (because of the way he did the analysis) absolutely nothing about the existence of correlations between positions." I don't think so. The main assumption in Durston's method is that the functional space has been mostly traversed during evolution by neutral variation. IOWs, that the variety of sequences we observe for a function is a reliable sample of the target space. As Durston attributes a probability of change to each position given the functional restraint, it is extremely likely that variation which use correlations between different positions are included in the sample. After all, he analyzes very old molecules, and those molecules have been the target of a lot of neutral variation in the course of evolution. Would any neo darwinist believe that the same principle which is supposed to generate all new functional proteins has not been able to test most or all functional sequences of a same function, with the help of negative selection? Obviously, Durston's method is an approximation. I have also given here an argument about why it should in general underestimate functional complexity. But it is measuring functional complexity. Maybe the measure is not precise. Maybe it is biased. But it is the simplest method we have at present. Other methods will be developed, as our understanding of the functional space of protein improves. The point is: functional complexity is a true and important dimension, it can be analyzed, and it is extremely relevant for the problem of the origin of biological information. Denying this is denying science itself. gpuccio
DNA_Jock: "It's your choice, but I honestly believe you would be happier if you moved." No. But thank you for believing it; I consider it an expression of affection. To be fair, I will give it back to you: I honestly believe that, if you really looked at the ID arguments without any bias, you could seriously consider moving. I don't know if you would be happier or not (although I suspect you would), but your intellectual honesty would certainly prompt, or at least tempt, you to do that. :) gpuccio
DNA_Jock at #474: In my cited comments I was essentially answering the objection that computing dFSCI should take into account all possible functions. If your problem is about post-specification, here is my view, which is very simple. It is completely false that functional post-specification is an example of the Texas sharpshooter fallacy. Here is the reason. First of all, I take from Wikipedia a very simple description of the essential fallacy:
The name comes from a joke about a Texan who fires some gunshots at the side of a barn, then paints a target centered on the biggest cluster of hits and claims to be a sharpshooter.
Now I quote here my definition of functional specification, from my OP on the subject:
So, the general definitions: c) Specification. Given a well defined set of objects (the search space), we call “specification”, in relation to that set, any explicit objective rule that can divide the set in two non overlapping subsets: the “specified” subset (target space) and the “non specified” subset. IOWs, a specification is any well defined rule which generates a binary partition in a well defined set of objects. d) Functional Specification. It is a special form of specification (in the sense defined above), where the rule that specifies is of the following type: “The specified subset in this well defined set of objects includes all the objects in the set which can implement the following, well defined function…” . IOWs, a functional specification is any well defined rule which generates a binary partition in a well defined set of objects using a function defined as in a) and verifying if the functionality, defined as in b), is present in each object of the set. It should be clear that functional specification is a definite subset of specification. Other properties, different from function, can in principle be used to specify. But for our purposes we will stick to functional specification, as defined here.
Now, if we see someone shooting, say, at a distant wall, then we go to the wall, and paint targets around each shot, and then we say that the shots were targeted and well shot, then we are fully in the fallacy. But if we see someone shooting at a distant wall, then we go to the wall, and see that there were targets painted there, and that they were there before the shooting, then we can well infer that the shooter is very good. Even if we observe the targets only after the shots have been fired. This is the difference between invalid post-specification and valid post-specification.

Invalid post-specification is the trick used by many neo darwinists to criticize the concept of CSI. Even Mark has used it, although in perfect good faith. It goes this way. You take a string that has come out of a random system. We know that, like any single string of that length, that particular string has probability 1/n of being "extracted", if the system is fair and has uniform probability distribution. But the point is, the string has no special property which identifies it, except for its specific sequence. So, I can take that string, and say: "See, I got a very unlikely result. That happens all the time!" This is exactly the infamous "deck of cards" fallacy, which many neo darwinists regularly use against ID. The point is, what I got is an extremely likely result: a random string with no special property, except its specific sequence. Can I use that sequence to specify a function? Sure. I can define my function as "any string which has that particular sequence". Is that a valid specification? Yes, but only if I use it as a pre-specification, because the probability of getting that particular string again is extremely low.

Mark tried something like that when he tried to define a function for some random numbers, saying that they pointed to specific items in some catalogue (I don't remember exactly what). In this way, he was trying to use the sequence already obtained to specify something after having obtained the string. He was making the sequence functional after having obtained it. The correct way to deal with the problem, instead, is: what is the probability of getting a random number (not too big) which points to some item in some catalogue? And the answer is obviously: "very high". Obtaining the same number again, instead, has a very low probability.

But functional specification is completely different. If a protein coding gene codes for a very efficient protein, let's say an enzyme, which can accelerate a biochemical reaction beyond any natural rate, that is not a target which I am painting after the protein has been observed. It is a target that I am observing after the protein has been observed. I see the protein working, and I know that the target has been found. But the target exists independently. For example, a biochemist can study a system and find some new enzymatic activity. Let's say that he isolates the protein, and verifies that it is really responsible for the enzymatic activity. So he defines the activity, and says that the protein is functional, and can do that particular thing. Note that, to do that, IOWs to define the function, even to establish how to measure it, and possibly useful thresholds of activity for a biological context, the researcher has no need to know the AA sequence of the protein. Why?
Because the observation and definition of the function is completely independent from any knowledge of the digital sequence which implements it. IOWs, the researcher is not painting a target around the shot. He is only observing that the shot has hit a well defined target. That's why functional specification can perfectly and validly be used as a post-specification. The function objectively generates a binary partition in the search space. That partition is independent from any knowledge of what sequences can implement it. Getting a sequence from the functional partition, if that partition is really small, is always unlikely. Observing unexpected hits of such extremely small target spaces is something which needs an explanation, and cannot be explained as a reasonable effect of random variation. I hope that answers your "Texas sharpshooter" objection. gpuccio
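As a toy illustration of the binary-partition definition quoted above, the following sketch enumerates a small search space, applies an explicitly defined function as the specifying rule, and converts the relative size of the resulting target space into bits. The five-letter alphabet and the tiny dictionary are made-up assumptions chosen only to keep the enumeration small; they are not part of gpuccio's procedure.

```python
import itertools
import math

# Toy search space: every 3-letter string over a 5-letter alphabet.
alphabet = "AEHST"
search_space = ["".join(p) for p in itertools.product(alphabet, repeat=3)]

# A functional specification: an explicit rule that partitions the set.
# The "function" here is membership in a tiny made-up dictionary.
tiny_dictionary = {"SEA", "EAT", "ASH", "HAT", "SET", "TEA"}

def is_functional(s):
    return s in tiny_dictionary

target_space = [s for s in search_space if is_functional(s)]
ratio = len(target_space) / len(search_space)
print(len(search_space), len(target_space), round(-math.log2(ratio), 2))
# 125 strings in the search space, 6 in the target space: about 4.38 bits
```

The rule can be stated, and the partition computed, without knowing in advance which particular strings satisfy it; that independence is what the comment above appeals to when it distinguishes valid from invalid post-specification.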
Keith s 469 You appear to be quoting from Dembski. Could you refer me to the pages that contain that summary? Silver Asiatic
KF, Thanks for the response. A couple of thoughts. First, would you mind giving us your explicit calculation? What's the equation? I ask because yours seems quite different from Dembski's, and the specific point I was exploring was gpuccio's assertion that CSI is simple and consistent. The fact that no two people seem to be using the same formula (or acronym!) suggests to me that it is neither. In particular, your usage doesn't seem to address P(T|H), which is obviously a significant part of the CSI calculation. I could be wrong about that. My math skills are quite poor, so I may be missing an important part of your explanation. If you are using P(T|H), how are you calculating H? If you are not using P(T|H), why not? Second, thank you for pointing out the Orgel quote. I saw it before, and frankly I suspect it's quote-mining. He uses the word "complexity," but I don't think it supports the desired inference that he was talking about the same kind of "complexity" you are. The language he uses suggests very strongly that he does not define "complex" in the same way as Dembski. He seems to be using a fairly conventional definition, in which a simple and homogeneous object is not "complex." Dembski's probability-based definition is very different. I think the assertion that Orgel's work is substantially in accordance with Dembski's is therefore wrong and misleading. Barry Arrington seems to have been misled, for one; he asserted very confidently that Orgel "uses the terms complex and specified in exactly the sense Dembski uses the terms." The language you quoted doesn't support that belief. But perhaps you've read the Orgel paper, which I have not. Does he, in fact, use a different definition of "complex"? Learned Hand
kf, I am entertained that you view Durston et al as using "the world of life as a long running Monte Carlo that susses out what works, what is flexible, what isn’t". As I noted to you on your 'elephant in the room' thread, YOU should be concerned at this usage because, to be a valid Monte Carlo run, there must be no intervention. You and I should both be concerned that n=1 is a [cough] rather low n for Monte Carlo. Finally, I don't see the probabilities that arise as being of much practical use, since Durston analyzed the observed sequence variation in extant, optimized sequences. This tells you very, very little about the size of the target space, and (because of the way he did the analysis) absolutely nothing about the existence of correlations between positions. This is what the author had to say on that subject: “…as noted in our paper, each site in an amino acid protein sequence is assumed to be independent of all other sites in the sequence. In reality, we know that this is not the case.” [cited by bornagain77, emphasis added] As I have pointed out to you previously, any bit-counting method (including BTW every one of your examples) assumes that there is no correlation between the positions, that is , it assumes independence. I note, in this regard, that you have never responded to my question (posted on the elephant thread), viz: You have already admitted that you assume independence and that this assumption is incorrect (“in info contexts reduce info capacity”), but you have asserted that this error is “not material”. How big is the error? How do you know? Please be as precise and concise as you can. DNA_Jock
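A small sketch of the independence point: when positions are correlated, summing per-position (marginal) entropies overstates the joint entropy, so a bit-counting estimate that assumes independence will not match the true figure. The perfectly correlated two-column "alignment" below is a deliberately extreme, made-up example, not real sequence data.

```python
import math
from collections import Counter

def entropy(symbols):
    """Shannon entropy (in bits) of a list of symbols."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical two-site "alignment" in which site 2 always copies site 1.
pairs = ["AA", "CC", "GG", "TT"] * 5

site1 = [p[0] for p in pairs]
site2 = [p[1] for p in pairs]

independent_estimate = entropy(site1) + entropy(site2)  # bit-counting assumption
true_joint = entropy(pairs)                             # accounts for the correlation
print(independent_estimate, true_joint)                 # 4.0 versus 2.0 bits
```

How large that gap is for real protein alignments is the quantitative question being pressed here.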
Gpuccio, Thank you for the kind words. Of course, I too am one of the unbanned. I have my own theory about what precipitates banning here at UD; as you note, there is something rather capricious about it. I think pretty much all chat boards suffer from the “more heat than light” problem and the irrelevant clutter problem, to a greater or lesser degree. TSZ is far from perfect in this regard, but UD is far worse: witness vishnu’s reported failure to notice, a mere 8 comments up-thread, that his interlocutor had been gagged. Obviously, this problem is made worse by people who start new threads because they feel like it, or who post supposedly seminal, comments-closed pontifications. In our previous conversation (which I guess has now migrated here, mea culpa), you stated that you had addressed the Texas Sharpshooter problem. But reviewing your cited comments (146 and 149 on that thread), I find that you do not address the fallacy of the post-hoc specification, but rather you embrace it, writing “dFSCI is defined for a specific function, not for all possible functions, and not for the object.” On this thread you repeat this position, stating “I compute dFSI only for an explicitly defined function. If I don’t recognize a function, I cannot compute dFSI nor make a design inference.” This IS the Texas Sharpshooter fallacy. I commend you for your honesty, and for being smart enough to see that encryption completely defeats design detection. Many design-proponents have yet to cotton on to this one. Unfortunately, encryption is merely an extreme example of a more general problem for DD: context is an essential input. Although I disagree with your conclusions, I applaud your effort to try to understand the complex role of RM+NS in the evolution of biological sequences. You, at least are making the effort. It’s your choice, but I honestly believe you would be happier if you moved. DNA_Jock
Adapa, You are incorrect: Joe was not banned from TSZ for posting links to porn. He was placed in moderation for posting a link to a close-up of female genitalia; he was banned when he refused to promise not to do it again. Nothing if not classy, that Joe. DNA_Jock
gpuccio: Why is everyone so eager to invite us to TSZ? Adapa: Because unlike UD, posters at TSZ don't get silently banned merely for presenting dissenting opinions. I'm sure you and the rest of the ID proponents are quite content in your safe snuggly little pillow fort here. But the real scientific world is out there
Bwahahaha! Thanks for that one! That made my day. The delusions of some people... Vishnu
Great, keith s pollutes this thread with his tripe. CSI is not circular. Detecting design is not useless. Just because keith s can twist reality doesn't mean reality is twisted. Joe
Adapa:
They can’t make a case here when they’ve been banned.
LoL! They were banned for not making a case, duh.
Hundreds of ID critics have been banned at UD over the years and now it’s started again.
For a good reason
You are the only person in the history of TSZ to be banned there
That is because most people avoid it or don't even know it exists.
and that was for posting links to porn.
Liar. I guess lying makes you feel better, though. Joe
Cross-posting this from the other thread:
Joe G:
Asking for help from IDists- Richie has been booted but his ghost spews on. Richie sed:
In the discussion* it is shown that Barry wanted a demonstration of CSI being made by natural forces, whilst Dembski defines CSI as only to be 'counted' in the absence of them.
Can anyone reference Dembski saying that or anything like that?
Joe, you really should learn more about ID. Yes, Dembski says exactly that. His CSI equation contains a P(T|H) term, and he describes H as follows:
Moreover, H, here, is the relevant chance hypothesis that takes into account Darwinian and other material mechanisms.
Silver Asiatic:
I have never seen that from Dembski and I suspect its a misreading. By taking his conclusion “only intelligent agency is known to produce it” as if it was the premise, he is criticized as giving a circular argument.
It’s not a misreading, and yes, Dembski’s argument is hopelessly circular. This is why scientists laugh at ID. One of its leading lights, the so-called “Isaac Newton of information theory”, is making a freshman logic mistake. That makes his concept of CSI useless. Pitiful, isn’t it?
keith s
Joe: "If the TSZ ilk can't make their case here why should anyone believe they can fare any better over there?" They can't make a case here when they've been banned. Hundreds of ID critics have been banned at UD over the years and now it's started again. You are the only person in the history of TSZ to be banned there and that was for posting links to porn. Adapa
Adapa- If the TSZ ilk can't make their case here why should anyone believe they can fare any better over there? Their garbage is garbage regardless of the forum. Sheesh Joe
keith s- Your cartoon version of CSI in your hands is totally useless and circular. ID's version of CSI is also useless in your hands but it is far from circular. CSI is a hallmark of intelligent design because all observations and experiences demonstrate only intelligent agencies can produce it. And if we ever observe some other process producing it then CSI will cease to be a hallmark of design. Joe
gpuccio: Why is everyone so eager to invite us to TSZ? Because unlike UD, posters at TSZ don't get silently banned merely for presenting dissenting opinions. I'm sure you and the rest of the ID proponents are quite content in your safe snuggly little pillow fort here. But the real scientific world is out there, in scientific journals and laboratories and even in uncensored science blogs. If your goal is only to backslap other ID proponents you're doing great. If your goal is to present a positive case for ID to others you're not in the game, not even on the bench. You're hiding in the darkest corner of the locker room. Adapa
The most laughable thing about Dembski's CSI is that even if it were measurable, it would still be useless. The argument from CSI is hopelessly circular. keith s
Learned Hand
Obviously “they don’t accept obviously true arguments” is a very subjective statement, and a fairly common belief.
It isn't subjective at all. Some things are obviously true. To deny them is to demonstrate one's commitment to irrationality. The legitimacy of empirical investigation and abstract reasoning are both based on obvious truths. StephenB
PS: At the link you can read Orgel for himself. He was speaking qualitatively, but in a way that was door-opening. kairosfocus
LH: The dFSCI would relate to the coded genes involved in the unique proteins used, and in the simple case it runs at 6 bits per codon: three two-bit, four-state elements already known to be functionally specific. With a typical protein at 250 AAs, we would have 750 bases, or 1500 bits on a simple direct metric. With, what, 30 proteins or so, that gives us an order-of-magnitude, back-of-envelope value probably good enough for Govt work. If you think it vital to reckon with the 20-state AAs, which come in via multiple R/DNA codons coding for the same AA plus the three stop codons, you can adjust on the protein values at 20 states or 4.32 bits per AA, and multiply up. Any AA can follow any AA, so it is arguable that there should be freedom. In praxis, only isolated AA chains in the possibility space fold, function and do the right job. If you confine yourself after the fact to protein chains, you look at relative frequencies and flexibilities of AAs in given positions [there is redundancy in the genetic code for that . . . ]. Durston et al did that, using the world of life as a long-running Monte Carlo that susses out what works, what is flexible, what isn't. Their results are as published in 2007. But you can already see how to look at proteins, and at info content, to get to them. In aggregate, far beyond the 500 - 1,000 bit threshold that swamps the atomic resources of the solar system or the observed cosmos. KF kairosfocus
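The arithmetic in the comment above, reproduced as a sketch; the 250-AA protein length, the 30-protein count, and the treatment of every base as a full 2 bits are the comment's own rough assumptions rather than measured values:

```python
import math

# Back-of-envelope figures from the comment above (illustrative, not measured).
aa_per_protein = 250
bases_per_protein = aa_per_protein * 3            # 750 bases
dna_bits = bases_per_protein * 2                  # 2 bits per 4-state base -> 1500 bits

protein_bits = aa_per_protein * math.log2(20)     # 20-state AAs -> about 1080 bits

n_proteins = 30                                   # the comment's rough protein count
aggregate_bits = n_proteins * dna_bits            # about 45,000 bits in aggregate
print(dna_bits, round(protein_bits), aggregate_bits)
```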
KF, that's interesting if a little opaque. Can you show me how to calculate dFSCI for the bacterial flagellum? I'd like to see how that compares to Dembski's calculations. Is it the same process, and should it return the same result? Is that result calculated in the same units? Also, perhaps you can answer a question I had a while back. Is it true that Orgel defined "complex" relatively colloquially, as opposed to "simple"? Because Dembski seems to define it as "unlikely," which drives quite a wedge between Orgel's work and Dembski's. Only one of them would define Kubrick's monolith as "complex." Learned Hand
Nullasalus,
I think it’s pretty easy to come up with a narrative, actually. UD only allows dissenters to post whose arguments are easily dismantled. People whose claims are not just wrong, but can be shown to be wrong with such ease that – despite said people being anti-ID – they are nevertheless a net burden on their own cause. To allow them to speak is to allow a kind of smearing of ID critics.
Thanks for making the effort to perceive the other side's thinking, rather than assuming the worst. I don't think this is quite accurate, though; personally, it seems plain that having a limited number of vigorous opponents is good for blog traffic. Learned Hand
LH: Pardon, but you are repeating misinformation. For starters, I would go here and onwards. For instance, the digitally coded functionally specific complex info we find in text strings such as posts, in source and object code for computers etc, and in D/RNA -- dFSCI for short -- is an obvious example that can start from the usual metric of the number of chained y/n q's needed to specify the state, as is commonly reported in file sizes on the pc. More sophisticated comms-theory metrics are longstanding, going back to Shannon's H as - SUM of pi log pi, and onward to work by Yockey et al and latterly Durston et al building on IIRC Szostak et al. It turns out that 3-d functionally specific complex organisation can be reduced to similar code strings of y/n q's, as AutoCAD etc. do, so discussion on strings is WLOG. So, we talk of functionally specific complex organisation and associated information, FSCO/I. This is linked to Orgel and Wicken in the 1970's trying to single out a key feature of cell based life that distinguished it from randomness and order. Onwards, we can address Specified complexity and complex specified information as supersets of the more relevant functionally specific stuff. KF kairosfocus
I agree with you: the happy celebrations of “victory” at TSZ are, at best, childish. I am happy for them that they can be happy with so little. Some of the people there are intolerable. Well, they probably feel the same for some of the people here.
I'm sure they do! I think they also feel the same about the celebrations of "victory" at Uncommon Descent. It's not uncommon for posters here to crow about the imminent death of materialism or Darwinism, just as creationists have been predicting confidently for longer than any of us have been alive.
But that is not really the point. The really sad point is the good people there, those who are intelligent and honest. In them, the cognitive bias of an unacceptable dogma which has really "won" in the scientific thought of the last decades is truly apparent. Not because they think and believe differently from what we do. That is simply to be expected. Plurality of thought is richness, and I never expect to convince anyone. The sad point, IMO, is that they are blind to obviously true arguments, of which they should at least understand the value, even if not agreeing on the conclusions.
This, too, could have been written by someone at TSZ or ATBC (except for the bit about the other side being dominant in the sciences). Obviously "they don't accept obviously true arguments" is a very subjective statement, and a fairly common belief. Personally, this is why I focus so heavily on testing and empirical success. That's one difference between the two sides. Someone posted just a little while ago a link to some research into the efficacy of phylogenetic analysis. Scientists tested whether phylogenetic analysis worked by actually doing it and analyzing the results against known relationships. They didn't just declare that it works, insult the skeptics, and demand that someone else prove them wrong. It's easy to say that the other side just doesn't listen/understand/believe/honestly engage or whatever the criticism du jour might be. What are the objective differences between the two camps? ID's inability to demonstrate that its core ideas work, while simultaneously declaring that they do, is the most obvious to me. Coming in a close second is the alignment of interests. Scientists are incentivized to come up with the most accurate novel ideas; no one becomes the next Darwin by defending the status quo. And ineffective or inaccurate ideas get pushed out in favor of new paradigms. That… doesn't seem to happen in the ID movement. When has the movement ever scrutinized and rejected one of its principles? Even Dembski, upon realizing that people were making fun of him for abandoning the explanatory filter, retracted his own criticism of it. Ideological purity seems prized over accuracy in general.
But no, they probably understand that if they even admit for a moment that functional information exists, and that it can be objectively measured, they are doomed.
I would be very surprised if this were true. I'm not a mathematician, as I said. I'll accept arguendo that such a thing as "functional information" exists; I have no reason to say otherwise. But I don't see it actually being calculated. I see it being used as a shibboleth: this flagellum has more than 500 bits of functional information. No, you can't see my math. Nothing that looks like an "objective" measurement, especially given the need to calculate the odds of non-design hypotheses. One reason I find the concept of CSI so risible is that it seems totally ineffectual in persuading actual mathematicians. A few, who were already IDists and religiously incentivized to accept its conclusions, say it's the greatest thing since sliced bread. But what about the community of mathematicians, information scientists and computer scientists? Are they all dyed-in-the-wool Darwinists who refuse to see how obviously correct, strong, consistent, and beautiful CSI is as a concept? Why has it made no inroads with them if it's so effective?
Instead, what do we see? Only painful attempts to deny a concept which is obvious even to a child, or to show that any attempt to deal with it scientifically cannot work.
My niece believes that her father is the strongest man in the entire world. Many things that are obvious to children are not actually true.
It is like trying to say that as far as gravitation cannot be fully understood, or measured with a precision which is maybe at present not possible in some extreme cases, it simply does not exist.
Except, of course, that the heretical gravitational science theorist could demonstrate the truth of his ideas by dropping a cannonball and a feather and timing their descent. Where are the empirical demonstrations that ID can actually distinguish design from non-design? Why is the response always a variation of, "We don't need to do that," or "You need to do that for us"? Shouldn't it be the first thing on Dembski's to-do list? Learned Hand
“If I’m wrong, would you please point me to consistent demonstrations of actual CSI calculations?” My post #400 here is a very simple example. Durston's paper is another one.
I'm no mathematician, but I don't see actual calculations of CSI in your post. You're declaring an amount of "minimum functional complexity," but is that the same thing as CSI? Why not, if CSI is such a strong, consistent concept? I was under the impression that calculating CSI required, inter alia, knowing the odds that something would come about by chance alone. What are those odds for a Shakespeare sonnet? A flagellum? I'm looking for hard numbers, if possible; I don't think it is, which is one reason why I think CSI is such a weak and ineffectual tool.
I have met a long challenge from the guys at TSZ where I volunteered to make design inferences on any string submitted by them or by anyone else, and nobody was able to submit anything which could even remotely cause a false positive. I repeat my challenge to you now. Please offer any string and I will make explicit design inferences according to my explicit procedure, many times illustrated here. It will work with 100% specificity.
I'm fascinated by this approach. In essence, IDists declare that they have a tool that would revolutionize the sciences and enshrine their names in history. But it doesn't need to be tested. Barry Arrington insists that other people test it for him; you're willing, but only if someone else provides the test for you. If I had such a powerful tool to prove to doubters, I think I'd be jumping at the chance to do my own tests and demonstrate its efficacy. I'm afraid I don't have an alphanumeric string for you, although I'll try to think of one. In the meantime, since CSI supposedly can be used to distinguish between design and non-design in biology, what's the CSI of an Ebola virus? I listen to a lot of Alex Jones (as research for a writing project) and they're all on fire about how the present strain Ebola was designed as a bioweapon in a secret CIA laboratory, etc. If design detection worked the way IDists claim it does, I would love to use it to refute such nonsense. So let's try! I understand the calculation will necessarily be subjective, but we can give it a good effort I hope. Clearly I don't expect it to work, but I do expect the failure to be informative. You may not have all the information you'd need to calculate CSI for the Ebola virus. If not, then what information would you need? Please be specific. And if you're calculating something other than CSI (like one of the other acronyms that get tossed around as a substitute), please explain why not CSI given that it's supposedly a strong, simple, clear and beautiful concept. Learned Hand
Why is everyone so eager to invite us to TSZ? Guys, with all the respect for your places, this is our blog, we like it here, this is a place for those who believe in the value of ID theory, and we believe in it, in case you have not noticed. So, we are happy to stay here. And it is not easy to blog in two places at the same time. I can say that out of personal experience! Next thing, you will invite us to join you at ATBC! :) gpuccio
gpuccio: The problem of encryption is not a problem, for my dFSCI procedure. As you probably know, I compute dFSI only for an explicitly defined function. If I don’t recognize a function, I cannot compute dFSI nor make a design inference. IOWs, the result of the procedure is a negative. It can be a true negative or a false negative, but not a positive.
Exactly. Encryption deliberately hides functional information by mixing randomness via keys and algorithms to the original. Of course one cannot reasonably detect functional information if the encryption is strong. The entire point of encryption is to hide the functional information. Likewise, I cannot see the sun if someone puts her hands over my eyes, either. Vishnu
keiths: Well, then by all means come over to TSZ and teach us a thing or two at TSZ. With your awesome skills, it should be a cakewalk for you.
Thanks for the invite, but I'll pass. Vishnu
DNA_Jock: Asking someone who has been gagged a question makes you look a fool.
I didn't know he was gagged at the time.
Ooops, I messed a blockquote – the final paragraph is mine, above it is Vishnu at 434. I think it’s obvious from the context….
Right. Vishnu
nullasalus: I don't really agree with what you say. While I am not an expert on banning at UD or anywhere else, I think that many good people who make good arguments (as far as it is possible :) ) against ID have been posting undisturbed for years. Mark Frank is a good example (although maybe he was banned for some time, I am not sure. In that case, I certainly disagree with the banning). DNA_Jock is a very good interlocutor. So was Piotr (I haven't seen him for some time, I hope he was not banned :) ). Others, like wd400 or Alan Fox, have their peculiarities, but nothing that could justify banning, at least IMO. And I really miss Petrushka: irritating, but stimulating and, in his own way, honest. Keith is borderline. He has made one, or maybe two, "arguments" (OK, I believe they are very silly, but they are arguments just the same). But he is very childish in debating them, and completely lacks any respect for other debaters. However, I can still tolerate him or just ignore his posts. Rich, on the other hand... Well, you know the saying: if you have nothing good to say about someone, just keep your mouth shut! gpuccio
How is that explained under keith’s narrative? As far as I can tell, rich has offered no argument whatsoever – only invective and insults. Keith (by his account) has delivered a “bomb” that destroys ID. Wouldn’t keith be the first person to be shown the door?
I think it's pretty easy to come up with a narrative, actually. UD only allows dissenters to post whose arguments are easily dismantled. People whose claims are not just wrong, but can be shown to be wrong with such ease that - despite said people being anti-ID - they are nevertheless a net burden on their own cause. To allow them to speak is to allow a kind of smearing of ID critics. Thus far, Keith has been allowed to speak. Shall he name himself as exhibit A? nullasalus
DNA Jock, I sometimes comment at TSZ, but I never bother to attempt to have a proper debate with anyone there, because frankly, there is not much to learn there, and the quality of responses is so infantile and lacking in any depth that I only do so to highlight the (lack of) quality of the evolutionist side. Their responses sound like angry robots, not thinking people. I am sure any bystanders will notice this right away. phoodoo
DNA_Jock: I am not so eager to discuss with Rich :) With you, it's quite another matter. The problem of encryption is not a problem for my dFSCI procedure. As you probably know, I compute dFSI only for an explicitly defined function. If I don't recognize a function, I cannot compute dFSI nor make a design inference. IOWs, the result of the procedure is a negative. It can be a true negative or a false negative, but not a positive. In the case of language, if I know that the string is encrypted, and know how to decrypt it, then I can define a function (like "meaning in English after this type of decryption"), and compute the dFSI. If I am not aware of the encryption, I will probably consider the string non functional, and it will be a false negative. No problem there. For prescriptive information, it is easier. Prescriptive information must be sufficient to implement the function objectively. So, if we have a piece of software which is zipped, the functional unit will be the software plus the decompression software. In the case of proteins, it is easier still. Proteins are encoded in the same way, through the genetic code. So, we have no problems of encryption there, other than the generic coding that we know. In a sense, all protein coding genes, to be really functional, require the transcription and translation system (and many other things). But, for the sake of simplicity, I usually refer only to the protein encoding sequence. That's usually more than enough for my argument. gpuccio
Ooops, I messed a blockquote - the final paragraph is mine, above it is Vishnu at 434. I think it's obvious from the context.... DNA_Jock
vishnu@439
I have (I think a pretty open mind), no dog in the fight, a graduate degree in microbiology (with a special interest in bacteria), and a PhD in computer science, and I have never been so unimpressed in my life as I have been with those guys from the “skeptical zone.”
You have a Master's in microbiology? Cool, my wife got hers from WSU, where did you get yours? Sad though that all that further education didn't help you avoid this mis-step: @425
Keith s Rich’s comments are disappearing, so he won’t be able to respond to you, StephenA. Moderators, which one of you is responsible? Why are you so frightened of open discussion?
@433
vishnu:
Rich:
Think harder, Vishnu. Think about what domains this test now can’t work in.
If you have a point to make, make it. @434
Vishnu:
I didn't think so. But for those listening, I'll say this... You cannot detect CSI (or whatever formulation) when it's been encrypted, that is, randomized by some randomization process. Why? Because it's hidden. I cannot see the sun or the stars with my eyes if someone is holding their hand over my eyes. Duh. Rich is an idiot and should not be acknowledged.

Asking someone who has been gagged a question makes you look a fool. Drawing inferences from their silence, worse. The good news: over at TSZ you won't run into that problem. Yourself and gpuccio are welcome to come over for a chat. Various UD commenters - WJM, phoodoo, ericB, upright biped - have made the trip. DNA_Jock
LH:
Do you recall the Mathgrrrl debacle, when UD fell into angry, acrimonious chaos when asked to calculate CSI?
You have a twisted memory of what actually happened. CSI works, deal with it. CSI says that living organisms are designed. And I am very comfortable with the fact that inference will never be overturned. And it bothers you that your sad position has nothing, not even a methodology. Joe
keith said:
Rich has been banned, Vishnu. He can’t make a point, because the UD moderators, who are frightened of open discussion, won’t allow him to.
Notice again how keith's use of negative personal characterizations towards his opponents feeds his self-aggrandizing narrative. He thinks UD is "frightened" of open discussion, instead of unwilling to tolerate rich's incessant sniping and sneering that comes with virtually no argument content whatsoever. One wonders, if UD is so frightened of open discussion, why would Mr. Arrington grant amnesty in the first place? If UD is afraid of contending with actual argument content that, in the minds of anti-ID advocates, "blows up" our position, why wasn't keith the first person banned (considering he's convinced he has destroyed the ID argument and thinks virtually everyone is "afraid" to respond to him)? Keith's narrative here makes no sense. Here you have about a dozen people taking keith's argument head on (including myself) to one degree or another, suffering through his ongoing, self-serving, self-aggrandizing narrative and negative characterizations of others, and yet, he's still here. How is that explained under keith's narrative? As far as I can tell, rich has offered no argument whatsoever - only invective and insults. Keith (by his account) has delivered a "bomb" that destroys ID. Wouldn't keith be the first person to be shown the door? A better explanation is that UD enjoys the presence of those who provide actual argument (even if it is lousy argument) and will only tolerate those who offer virtually nothing other than invective and sniping for a short while. William J Murray
Vishnu: I agree with you: the happy celebrations of "victory" at TSZ are, at best, childish. I am happy for them that they can be happy with so little. Some of the people there are intolerable. Well, they probably feel the same about some of the people here. :)

But that is not really the point. The really sad point is the good people there, those who are intelligent and honest. In them, the cognitive bias of an unacceptable dogma which has really "won" in the scientific thought of the last decades is truly apparent. Not because they think and believe differently from what we do. That is simply to be expected. Plurality of thought is richness, and I never expect to convince anyone. The sad point, IMO, is that they are blind to obviously true arguments, of which they should at least understand the value, even if not agreeing on the conclusions. As I have tried to argue with Learned Hand, for example, their denial of the very simple problem of functional information is intellectually suicidal: even many of the best neo darwinist biologists understand clearly that functional information in biology is a problem, and try to deal with it as well as they can (not very well, indeed!). Why do they think that Szostak produced his deeply flawed paper? But no, they probably understand that if they even admit for a moment that functional information exists, and that it can be objectively measured, they are doomed.

The only intellectually correct approach would be: OK, I understand that functional information exists, but I believe that you are wrong about how to define and measure it. I have my good proposals for doing it otherwise. Instead, what do we see? Only painful attempts to deny a concept which is obvious even to a child, or to show that any attempt to deal with it scientifically cannot work. It is like trying to say that because gravitation cannot be fully understood, or measured with a precision which is maybe at present not possible in some extreme cases, it simply does not exist. gpuccio
Learned Hand: "If I'm wrong, would you please point me to consistent demonstrations of actual CSI calculations?" My post #400 here is a very simple example. Durston's paper is another one. I have met a long challenge from the guys at TSZ where I volunteered to make design inferences on any string submitted by them or by anyone else, and nobody was able to submit anything which could even remotely cause a false positive. I repeat my challenge to you now. Please offer any string and I will make explicit design inferences according to my explicit procedure, many times illustrated here. It will work with 100% specificity. gpuccio
Vishnu,
I have (I think a pretty open mind), no dog in the fight, a graduate degree in microbiology (with a special interest in bacteria), and a PhD in computer science, and I have never been so unimpressed in my life as I have been with those guys from the “skeptical zone.”
Well, then by all means come over to TSZ and teach us a thing or two at TSZ. With your awesome skills, it should be a cakewalk for you. See you there! keith s
gpuccio, "I don't like rudeness, but I can tolerate it, if it comes with detectable arguments. If it is only rudeness, without any content, I usually simply ignore it." A fine approach. "I am less good (although I try) in tolerating tricky arguments and statements. For example, defining CSI and its subsets as 'infinitely elusive and mutable' is really unfair: it is a very strong and clear concept." I could not disagree more. CSI is not a "strong" concept; it does not work. How can a concept be "strong" when its advocates claim it has tremendous real-world power to detect design, but cannot actually use it successfully? Let's be clear about an assumption that I'm making: CSI has never been applied successfully to detect design in a controlled test. Not once. I'm not even convinced that it's been accurately calculated (or could be accurately calculated) in the paltry toy examples its advocates have substituted for such testing. In any event, I don't see what makes it "strong." It doesn't do anything in the real world, and only detects design when the user starts with either the knowledge or the assumption that design exists. (And, as I've written before, I believe that even CSI's main advocates lack faith in its power. Otherwise, why not test it and prove that it works? I think the obvious answer is that they know it can't be applied in the real world to detect design; testing it would be an unnecessary embarrassment given that ID's followers don't seem to care whether the design detection toolkit actually functions.) Calling it "clear" is equally ambitious. Do you recall the Mathgrrrl debacle, when UD fell into angry, acrimonious chaos when asked to calculate CSI? I don't believe that two different IDists, given identical real-world subjects, would either perform the same CSI calculation or come up with the same numeric result. In what way is that "clear?" "So, you can say that CSI is a wrong concept: you are entitled to your own opinion, like anyone else. You can say that there are slightly different definitions and slightly different explicit procedures. That is true of practically everything in science." I will believe that the "explicit procedures" are only "slightly different" when there is an explicit procedure for calculating CSI in real-world objects, and IDists can apply that procedure reliably and consistently. I don't think that has ever been the case. If I'm wrong, would you please point me to consistent demonstrations of actual CSI calculations? Learned Hand
I have (I think) a pretty open mind, no dog in the fight, a graduate degree in microbiology (with a special interest in bacteria), and a PhD in computer science, and I have never been so unimpressed in my life as I have been with those guys from the "skeptical zone." I read a few of their posts on the "skeptical zone" and they seem to think they "won." Quite bizarre. Well, all I can say is let the readers decide. Those guys are far from formidable. They might be just this side of stupid. But I'll let others decide. Vishnu
If Rich had a point and made it, he wouldn't have been banned. Rich never has a point beyond being the jester and cheering section. keith s is so frightened of an open discussion that he has to ignore and remain willfully ignorant of all of the arguments that refute his claims. Joe
keiths, well since he digs you so much you can make the point for him. But I think I said what would have been my reply anyway, so whatever. Cheers Vishnu
Vishnu, to the empty chair Rich was sitting in:
If you have a point to make, make it.
Rich has been banned, Vishnu. He can't make a point, because the UD moderators, who are frightened of open discussion, won't allow him to. keith s
Pardon the sloppy writing. Editing please, oops :) Vishnu
I didn't think so. But for those listening, I'll say this... You cannot detect CSI (or whatever formulation) when it's been encrypted, that is, randomized by some randomization process. Why? Because it's hidden. I cannot see the sun or the stars with my eyes if someone is holding their hand over my eyes. Duh. Rich is an idiot and should not be acknowledged. Vishnu
Rich: Think harder, Vishnu. Think about what domains this test now can’t work in.
If you have a point to make, make it. Vishnu
Learned Hand: I don't like rudeness, but I can tolerate it, if it comes with detectable arguments. If it is only rudeness, without any content, I usually simply ignore it. I am less good (although I try) in tolerating tricky arguments and statements. For example, defining CSI and its subsets as "infinitely elusive and mutable" is really unfair: it is a very strong and clear concept. It obviously has some aspects which require more detailed discussion, but the concept itself is simple, beautiful and consistent. If Alan Fox, who is usually a good commenter, can write to me: "Saying some things clearly exhibit CSI without explaining which things and how this is established is not saying much." when he knows perfectly well how many tons of posts I have dedicated to this issue, that is really strange. I can understand that he does not agree with my explanations, but how can he even suggest that I have not given them? So, you can say that CSI is a wrong concept: you are entitled to your own opinion, like anyone else. You can say that there are slightly different definitions and slightly different explicit procedures. That is true of practically everything in science. However, the basic concepts that we can find in Dembski's explanatory filter are essentially always the same. But saying that CSI and its subsets are "infinitely elusive and mutable" is simply false and unfair. gpuccio
He basically claimed I was lying because… because I told him something he hadn’t heard before I guess. I didn't read his comment that way. I think he was saying that you made up a new aspect of the infinitely elusive and mutable [CSI | FSCO/I | dFSCI]. I’m not claiming that other pro-ID commenters aren’t being insulting and rude, but I don’t speak for them, and they don’t speak for me. Alright. I thought you were making a broader statement when you asked, "why do you think [sneers and mud-slinging] have any place here?" Learned Hand
Those who have been silently banned or moderated -- please post a comment on this thread at TSZ so that we will know the extent of the problem. Thanks. keith s
Learned Hand: Rich wasn't talking to Mapou or Joe or even Barry when he claimed I just making stuff up. Do you think that because someone else was rude to him it justifies his being rude to anyone else? He basically claimed I was lying because... because I told him something he hadn't heard before I guess. I'm not claiming that other pro-ID commenters aren't being insulting and rude, but I don't speak for them, and they don't speak for me. I do ask that the people that are rude to me do justify their behavior. Feel free to ask the same of anyone you speak to. StephenA
Rich and william spearshake: As I am in a generous mood, I will explain to you why I am so certain of my statement in the previous post. How many known languages are there? I have no idea. But let's suppose there are a billion. So, if the functional specification is "having perfect sense in any known language", instead of "having perfect sense in English", how much bigger does the target space become? The answer is simple: 30 bits bigger. In my example, the search space is 3000 bits. The difference is irrelevant. gpuccio
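The "30 bits bigger" figure follows from the comment's own assumption of a billion known languages, each contributing at most an English-sized block of functional sequences, so that the target space grows by at most a factor of 10^9:

```python
import math

# "A billion" known languages is the comment's assumption, not a census.
n_languages = 10**9

# If each language adds at most an English-sized block of functional strings,
# the target space grows by at most a factor of n_languages:
extra_bits = math.log2(n_languages)
print(round(extra_bits, 1))   # about 29.9 bits, against a 3000-bit search space
```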
Rich and william spearshake: "What about an encrypted Sonnet?" It's specified just the same. StephenA's answer is perfectly correct. I must remind you that any function can be defined for an object, and the dFSCI is computed for the specified function. The concept is that if a function, any function, can be defined for an object which implies a very high specific complexity, we can infer design. That statement is true, and it could be easily falsified in a Popperian sense: just give me a 600-character-long string, encrypted if you prefer, which has perfect sense in any known language, and which has been generated by random character-generating software (with the evidence, I will not believe you on your word alone! :) ). gpuccio
If not, why do you think they have any place here? Why would a reader, having seen Barry Arrington's snide posts or the abusive comments that pour out from UD regulars like Mapou and Joe, conclude that sneers don't have a place at Uncommon Descent? It would be nice if commenters on both sides ratcheted the rhetoric down a notch (or, in some cases, many notches). I don't know how anyone could pretend that "sneers and mud slinging" come from only one side. Learned Hand
Rich's comments are disappearing, so he won't be able to respond to you, StephenA. Moderators, which one of you is responsible? Why are you so frightened of open discussion? keith s
Rich: Why do you feel the need to add sneers and mud slinging to your comments? Were insults instrumental in your acceptance of evolution? Were you shamed into accepting it? If not, why do you think they have any place here? StephenA
A new concept - HIDDEN FIASCO. Making stuff up is easier than science. StephenA, there are many languages. Are they somewhat arbitrary? Rich
And then if you run the zero-FCSI string through a decryption program, out pops a high-FCSI string. Did the decryption program produce FCSI? Or was the FCSI calculation wrong when it returned 0?
If you have both the string and the method to decrypt it, then you know that the string does have a specific function (i.e. it can be decrypted into English text), and therefore the FCSI calculation should return a much higher value. The FCSI was hidden, not absent. Basically, the FCSI calculation was wrong when it returned 0.
How do you test the claim that nature cannot produce CSI if you have no way of determining whether the initial state of no-CSI is a false negative or not?
We can't be absolutely sure that the initial state of any test does not already contain hidden FCSI. This is the nature of science. All conclusions are held provisionally, with the understanding that we might be wrong and that new knowledge may upend our theories. That said, would you like to offer an example of such a test so that we don't accidentally talk past each other? StephenA
StephenA:
Unless you know ‘decryption method X’ you will not know of any specification that the new text matches so the FCSI calculation will return 0.
And then if you run the zero-FCSI string through a decryption program, out pops a high-FCSI string. Did the decryption program produce FCSI? Or was the FCSI calculation wrong when it returned 0?
This is fine, since we already know that the design inference often gives false negatives.
How do you test the claim that nature cannot produce CSI if you have no way of determining whether the initial state of no-CSI is a false negative or not? R0bb
Ladies, Ladies, no need to fight. Keith S has read it. Mung hasn't read it, nor all of the thread (funny that you drive by even when at home, Mung). Keith S has provided some choice quotes that make the OP look silly. Mung has Butthurt. Rich
keiths:
You have read it, haven’t you?
What part of "no, I have not read it" do you not understand? But I did read the book announcement, and according to you, I cannot tell the difference between the book and the book announcement, so why do you ask? Mung
Rich,
And I do love Keith. Did you know I once wrote a limerick about him and Hempel’s Paradox?
Yeah, and it didn't even scan properly. keith s
Think harder, Vishnu. Think about what domains this test now can't work in. Rich
Mung, I understand how now, with egg on your face (again), you'd like to change the subject. Your Butthurt is delightful. And I do love Keith. Did you know I once wrote a limerick about him and Hempel's Paradox? Rich
Mung, I have read the book, and it confirms what I gleaned from the interviews. Wagner's research is bad news for ID. Now, for laughs, let's see you try to spin Wagner's book into good news for ID. You have read it, haven't you? keith s
It's simply endearing how Rich genuflects to keiths. Mung
Rich: "So what are the implications for ID, Vishnu?"
None. Vishnu
Alan Fox:
It’s my contention [whatever] cannot be quantified until it is operationally defined.
Alan Fox doesn't know what it means to be a skeptic. Alan, please provide us with the operational definition of a "skeptic." What are the operational boundaries of "the skeptical zone" and how do you know? Mung
Mung: "This is simply hilarious. keiths has not read the book!" This is simply hilarious. Mung has not read the Thread! "217 keith s November 1, 2014 at 11:59 pm I downloaded the book onto my Kindle, and sure enough, it confirms what I gathered from the interview." Have you read it? Then perhaps you should be quiet and listen to those that have. Rich
"to test our methodologies" Oh good, you're ready to test 10 data sets that I give you then, some of which will be designed, some not. Rich
keiths:
Also, since it doesn’t appear to have dawned on any of the IDers here, Wagner’s ideas do not bolster the ID position at all. They undermine it.
This is simply hilarious. keiths has not read the book! His argument consists of: I don't need to read the book, I read an interview! Well congratulations keiths, you can't tell the difference between a book and an interview. Mung
The implications for ID? For one, that there is more to living organisms than matter, energy and what emerges from their interactions. For another that there is a purpose to our existence beyond the mundane. Joe
Rich, you are confused, as usual. We use English language prose to test our methodologies and to provide examples to our simpleton opponents. Joe
StephenA, it seems ID is remarkably useful in detecting the design of English language prose that we can read and understand. It's also useful for... no wait... that's all. And you've made "specification" so soft and squishy and arbitrary. Great stuff. Rich
Encryption hides the design in a string by changing the text so that it no longer matches a well known specification (the English language). Instead it now matches the specification of "text that can be decoded into English text by decryption method X". Unless you know 'decryption method X' you will not know of any specification that the new text matches so the FCSI calculation will return 0. This is fine, since we already know that the design inference often gives false negatives. However, we never see false positives. If you can give an example of a false positive, that would be a falsification of ID. StephenA
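A toy sketch of the point about encryption: a crude "reads as English" check fails on the ciphertext and succeeds once the decryption method is applied, matching the "text that can be decoded into English text by decryption method X" specification. The tiny word list and the ROT13 "encryption" are illustrative stand-ins, not a serious language test or cipher.

```python
import codecs

# A crude stand-in for a "reads as English" check; the word list is a toy.
TINY_WORDLIST = {"the", "sun", "is", "hidden", "but", "still", "there"}

def looks_like_english(text):
    words = text.lower().split()
    return bool(words) and all(w in TINY_WORDLIST for w in words)

plaintext = "the sun is hidden but still there"
ciphertext = codecs.encode(plaintext, "rot_13")   # a trivial stand-in for encryption

print(looks_like_english(ciphertext))                           # False: no function recognized
print(looks_like_english(codecs.decode(ciphertext, "rot_13")))  # True: "decrypts to English"
```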
So what are the implications for ID, Vishnu? Rich
What about an encrypted Sonnet?
The short and sweet answer is: an encrypted Sonnet is no longer merely a Sonnet. Encryption introduces randomness into the data in the form of a randomized key (the more random the better) applied to the data using some algorithm (there are many). That's why encryption works. Vishnu
Rich: "What about an encrypted Sonnet?" Please, don't start this again. That discussion resulted in Barry banning me under at least three different names. As it is, I am banned from posting from several IP addresses including home, work (don't tell my boss) and at least two of my favourite bars. I really don't want to have to find another bar to avail myself of this pseudo-amnesty. william spearshake
What about an encrypted Sonnet? Rich
Me_Think: "Do you really believe we can calculate the dFSCI?" Yes. "I see no way to calculate the dFSCI of, say, a glacier or sand. Could you explain how you calculated?" Yes. Let's see my list. Remember, it's just a list which I made in a few moments, to adhere to a repeated request. "Sand on a beach is not designed. A Shakespeare sonnet is designed. The pattern of the drops of rain is not designed. ATP synthase is designed. A glacier is not designed. Windows XP is designed. A snowflake is not designed. Histone H3 is designed."

So, I mention 4 objects or systems which do not allow a design inference because they do not exhibit dFSCI: The disposition of the sand on a beach: easily explained as random, no special function observable which requires a highly specific configuration. Can you suggest any? I can't see any, therefore I do not infer design. The pattern of the drops of rain. Same as before. A glacier: this is less random, but it can be easily explained, as far as I know, by well understood natural laws, with some random components. I am not an expert, but I cannot see in a glacier any special configuration which has a highly specific complexity which is not dependent on well understood natural laws. Therefore, I do not infer design. Could you suggest some complex functional configuration in a glacier which cannot be explained by explicit and well understood laws of physics? The snowflake I have added because it is an example of an ordered pattern which could suggest design, but again the configuration is algorithmic, and its origin from law and randomness very well understood. No dFSCI here, either.

Let's go to the 4 objects for which I infer design: A Shakespeare sonnet. Alan's comments about that are out of order. I don't infer design because I know of Shakespeare, or because I am fascinated by the poetry (although I am). I infer design simply because this is a piece of language with perfect meaning in English (OK, ancient English). Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600-character sequences which make good sense in English is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski's UPB. As I am aware of no simple algorithm which can generate English sonnets from single characters, I infer design. I am certain that this is not a false positive. Similar considerations are good for Windows XP: here the number of functional bits is so big that no discussion is really necessary.

Let's go to the two proteins. For ATP synthase, I have already computed a minimum of 1600 bits of dFSCI just for the alpha and beta subunits. You can find the reasoning here: https://uncommondesc.wpengine.com/intelligent-design/four-fallacies-evolutionists-make-when-arguing-about-biological-function-part-1/ Point 3. Finally, the H3 histone. Here the reasoning is very simple. Histones are extremely conserved proteins, and have a very important function in eukaryotes: building the nucleosome, which is the crucial center of many transcription regulation events. Human H3.1 is 136 AA long. Blasted against the molecule of Saccharomyces cerevisiae, there are 121 identities. IOWs, the molecule is perfectly conserved throughout the eukaryotes. Therefore, we can assume a minimum functional complexity of, just to be safe, more than 500 bits.
Which is beyond Dembski's UPB, And certainly far beyond the threshold I have proposed many times for biological systems on our planet (150 bits). Therefore, I infer design. Obviously, both for ATP synthase and histone H3 I am aware of no algorithmic explanation for their origin. Can you give one? gpuccio
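For readers who want to check the arithmetic behind the figures above, here is a minimal sketch of that style of calculation (not gpuccio's own code). The 32-symbol alphabet, the 2^2500 bound on the target space, and the treatment of the 121 conserved histone positions as fully constrained are assumptions used only for illustration.

```python
import math

def search_space_bits(length, alphabet_size):
    # Bits needed to specify one particular sequence among all
    # sequences of the given length over the given alphabet.
    return length * math.log2(alphabet_size)

# Shakespeare sonnet: ~600 characters; a 32-symbol alphabet (letters plus
# space and punctuation) is assumed here just to reproduce the "about 3000 bits" figure.
sonnet_search = search_space_bits(600, 32)       # 600 * 5 = 3000 bits
target_space_bound = 2500                        # assumed: log2(number of meaningful 600-char texts) < 2500
sonnet_functional_bits = sonnet_search - target_space_bound   # 500 bits on these assumptions

# Histone H3: 121 positions identical between human H3.1 and yeast H3;
# treating each conserved position as constrained to 1 of 20 amino acids.
h3_functional_bits = search_space_bits(121, 20)  # ~523 bits, above the 500-bit threshold

print(f"Sonnet: search space {sonnet_search:.0f} bits, functional estimate > {sonnet_functional_bits:.0f} bits")
print(f"Histone H3: conserved-position estimate {h3_functional_bits:.0f} bits")
```

On these assumptions the sketch reproduces the roughly 3000-bit search space claimed for the sonnet and an estimate just above 500 bits for histone H3; change the assumptions and the numbers move accordingly.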
Alan: "Saying some things clearly exhibit CSI without explaining which things and how this is established is not saying much." Are you really saying that I have never explained how to compute dFSCI? Must I repeat the same things at each post? gpuccio
Silver Asiatic I'm creating my own set of blogs linked to an online mind mapping tool (Mind Meister). Learning the blogging stuff now, which includes moderating comments. Will provide a link to the nice folks here, as per BA77's request, when it's ready for sharing. :) curious lurker
Dionisio - ok, thanks. Did you learn something interesting from the experiment? Or maybe it was just for fun, which is a good enough reason to try it. Silver Asiatic
Astroman
As I have pointed out in more than one thread, ID proponents say that the who, when, where, and how questions come “after design is determined”.
That's right. It's a different branch of study and analysis.
ID proponents constantly claim that they have already demonstrably determined design in most or all things in the universe,
Could you offer a reference to that claim? For myself, I have never seen it. There are aspects of the universe that are explained by natural processes and others by design. So, I find it hard to believe that ID proponents are claiming that everything in the universe shows empirical evidence of having been designed by intelligence.
They also regularly proselytize the biblical God as the designer
If you're talking about various individuals who post on blog sites, that's one thing. If you're talking about ID research, however, what you said is not correct.
but when they’re asked the who, when, where, and how questions they use every cowardly and dishonest dodge in the book to squirm out of answering those questions.
As with any academic or scientific discipline, ID has the right to define its scope of study. The nature of the designer falls outside the scope of the ID project. ID has defined boundaries. It can't answer the questions that it hasn't done the research on. Does evolution include Origin of Life studies? Is that dishonest? Silver Asiatic
curious lurker said: "Someone asked: “By whom? How? When?” Hasn’t it been stated on a number of occasions in different discussion threads that those questions (specially the first one) are not part of the core ID proposition? Why do some commenters insist in ignoring what others have written so clearly so many times? Is there a subtle agenda behind their questioning? Can someone explain this?" As I have pointed out in more than one thread, ID proponents say that the who, when, where, and how questions come "after design is determined". ID proponents constantly claim that they have already demonstrably determined design in most or all things in the universe, and they also claim that design was demonstrably determined loooong ago by people like Newton, Wallace, and many other dead people that they like to add to their ID-Creation coterie. They also regularly proselytize the biblical God as the designer but when they're asked the who, when, where, and how questions they use every cowardly and dishonest dodge in the book to squirm out of answering those questions. Astroman
gpuccio and KF, Could the following questions somehow relate to the concepts behind CSI, dFSCI, FSCO/I?
What makes myosin VIII become available right when it's required for cytokinesis? Same question for actin. What regulatory networks + signaling pathways determine the precise timing? How does that relate to the circadian clocks? What genes are myosin and actin associated with? What signaling pathways trigger those genes to express those proteins for the cytokinesis? BTW, what do the transcription and translation processes for those two proteins look like? Are they straightforward or convoluted through some splicing and stuff like that? Are there chaperones involved in the post-translational 3D folding? Where are those proteins delivered to? How does that delivery occur? How does the myosin pull the microtubule along an actin filament? What other factors are involved in that process? How many of each of those proteins should get produced for that particular process? Any known problems in the cases of deficit or excess of each of them?
We all understand that we could face a gazillion questions like these related to many separate or interrelated issues in systems biology. This was just a relatively easy example, kind of a small sneak preview of the entire movie. :) Thank you in advance for any comments on this. curious lurker
Silver Asiatic
As I’m sure you’ve discovered, this is usually not just a friendly debate about trivial matters. The ID proposal, potentially, has a massive impact for society and at the personal level. It’s a clash of worldviews and it generates a lot of passion and, quite often, a lot of hostility on either side. With that, the agenda is not so subtle. For many, this battle has to end in a victory for their side. Perhaps it’s true, “all’s fair in love and war”. So, that might mean that it’s fair enough to simply ignore what ID advocates have said many times about their own area of study and insist that they’re dishonest if they don’t provide analysis on the identity of the designer.
Yes, sir! That's it! curious lurker
Silver Asiatic, Dionisio is the same commenter as curious lurker. This was a kind of 'silly' experiment I tried for fun, after being away from here for a couple of weeks. :) curious lurker
Hi Silver Asiatic: Thank you for answering my questions. curious lurker
Alan Fox #364
But God created the world
If so, then the world was designed by intelligence. If you want to take that as your starting point, then you'd be pursuing something much different than the ID proposal. Silver Asiatic
curious #382
Hasn’t it been stated on a number of occasions in different discussion threads that those questions (specially the first one) are not part of the core ID proposition?
Yes, exactly. The ID project has defined boundaries. It looks at empirical evidence.
Why do some commenters insist in ignoring what others have written so clearly so many times? Is there a subtle agenda behind their questioning? Can someone explain this?
Good questions. As I'm sure you've discovered, this is usually not just a friendly debate about trivial matters. The ID proposal, potentially, has a massive impact for society and at the personal level. It's a clash of worldviews and it generates a lot of passion and, quite often, a lot of hostility on either side. With that, the agenda is not so subtle. For many, this battle has to end in a victory for their side. Perhaps it's true, "all's fair in love and war". So, that might mean that it's fair enough to simply ignore what ID advocates have said many times about their own area of study and insist that they're dishonest if they don't provide analysis on the identity of the designer. Silver Asiatic
Hi Dionisio: I understand your suggestions. Will try to remember them next time. Thank you! In this case, the text I quoted was copied from the end of the post # 364 which appears to be written by Alan Fox. curious lurker
#382 curious lurker
Someone asked: “By whom? How? When?” Hasn’t it been stated on a number of occasions in different discussion threads that those questions (specially the first one) are not part of the core ID proposition? Why do some commenters insist in ignoring what others have written so clearly so many times? Is there a subtle agenda behind their questioning? Can someone explain this? Can we all be more serious on discussing real issues? These are not easy concepts we are discussing here. Can we try and approach these discussions with respect and seriousness, so we all can benefit from them? Thank you!
When you quote some text from another post, as you did when you wrote this: Someone asked: “By whom? How? When?” can you also indicate the exact post number you copied that text from? That would help some of us to see the quoted text within the context of the original post and the discussion it was associated with. Also, try to use some of the HTML tags and attributes available for posting your comments. It makes the text more readable. Thank you. Dionisio
Alan, Maybe you or someone could come up with a working definition of the ToE. Now that would really be something. phoodoo
Upright BiPed @ 383
ID proponent makes an observation.
A glacier and sand not having dFSCI is not an observation. It is a calculation. All I am asking is how you calculate the dFSCI and find that it is zero or null. If dFSCI is just a subjective observation then I have no objection at all. Me_Think
Have a good day. Maybe you or someone could come up with a working definition of CSI after work. How's the Web presentation of your semiotic argument going, BTW? Alan Fox
ID proponent makes an observation. ID critic makes an objection. ID proponent asks what the objection has to do with the observation. ID critic responds "you tell me". - - - - - - - - :| I'm off to work... Upright BiPed
Someone asked: "By whom? How? When?" Hasn't it been stated on a number of occasions in different discussion threads that those questions (specially the first one) are not part of the core ID proposition? Why do some commenters insist in ignoring what others have written so clearly so many times? Is there a subtle agenda behind their questioning? Can someone explain this? Can we all be more serious on discussing real issues? These are not easy concepts we are discussing here. Can we try and approach these discussions with respect and seriousness, so we all can benefit from them? Thank you! curious lurker
Crick's biological information is operationally defined and CSI wrt biology = Crick's biological information. Joe
CSI is operationally defined, Alan. OTOH your position still has nothing otherwise you would lead by example and you don't. Joe
Oops. "Scientific" Alan Fox
@ UB You tell me. I have tried to be clear that CSI and variants are effectively scientific gibberish. It's my contention that CSI cannot be quantified until it is operationally defined. Can you or anyone do that? Alan Fox
So you have no way of distinguishing sand from sand castle other than “looks designed to me”.
A pile of sand from a -- what? And what does this have to do with a pile of sand having no dFSCI? Upright BiPed
Alan Fox:
So you have no way of distinguishing sand from sand castle other than “looks designed to me”.
Umm, no. Again Alan with his usual baby-style of argumentation. Joe
Alan Fox:
Sorry if that seemed a little testy but the assertions that CSI AND variants are real and useful Concepts are a bit trying.
Only because you are an ignorant little baby. What does blind watchmaker evolution have to offer?
Saying some things clearly exhibit CSI without explaining which things and how this is established is not saying much.
Then it is a good thing that we say which things have CSI. Joe
UB So you have no way of distinguishing sand from sand castle other than "looks designed to me". Bogus! Alan Fox
Alan Fox:
Yet if God created the World, surely the sand on the beach must be part of the design?
Being part of the design does not mean it was designed. A broken fog light is still part of the car it is on.
By whom? How? When?
Alan, arguing like a baby does not help you. Joe
Enkidu:
No one ever said or implied ToE is right because it’s the scientific consensus.
What ToE? Could you please link to it or admit that it doesn't exist?
Just the opposite. ToE is the scientific consensus because the vast majority of scientists who study it are convinced by the huge amount of consilient evidence it has.
Only someone ignorant of the evidence could say that. Joe
GP tells them that a glacier or pile of sand has no dFSCI. One of them stands forward to condescend "I have no idea how you would calculate the dFSCI in a glacier or pile of sand" It's perfect. :| Upright BiPed
Gpuccio at 376: Sorry if that seemed a little testy, but the assertions that CSI and variants are real and useful concepts are a bit trying. Saying some things clearly exhibit CSI without explaining which things and how this is established is not saying much. Alan Fox
Being a lurker has certain advantages, one of which could be that one can see the debate from the sidelines and analyze the arguments without being pressured to comment on them soon. By posting a comment the lurker's status disappears. However, strong curiosity leads one to break the lurker's silence and become a commenter. By now I have the perception that a few of the commenters in this blog are serious, but others seem less interested in discussing anything. The heated arguments, loaded with 'name calling' and vulgarities, are not interesting at all. At times the abundance of such senseless posts makes one quit reading the whole thread. I have a few questions for gpuccio, who seems to be very educated, belongs in the very serious group and has demonstrated a very clear writing style. However, I don't expect gpuccio (or anybody else here) to answer the specific questions, but to tell whether questions like these could be somehow associated with his definition of dFSCI. What makes myosin VIII become available right when it's required for cytokinesis? Same question for actin. What genes are they associated with? What signals trigger those genes to express those proteins for the cytokinesis? BTW, what do the transcription and translation processes for those two proteins look like? Are they straightforward or convoluted through some splicing and stuff like that? Are there chaperones involved in the post-translational 3D folding? Where are they delivered to? How does that delivery occur? How does the myosin pull the microtubule along an actin filament? How many of each of those proteins should get produced for that particular process? Any known problems in the cases of deficit or excess? [note that the above questions were copied from another web site] Thank you! curious lurker
gpuccio@366
The objects listed as designed are those which clearly exhibit dFSCI. The objects listed as non designed are those which do not.
Do you really believe we can calculate the dFSCI? I see no way to calculate the dFSCI of, say, a glacier or sand. Could you explain how you calculated it? Me_Think
Alan: Frankly, I did not expect this kind of "arguments" from you. You should know better. I had given a procedure. I was asked to give a list, so I complied. The list is obviously obtained by the procedure. The objects listed as designed are those which clearly exhibit dFSCI. The objects listed as non designed are those which do not. That means that design cannot be inferred for them (it is not detectable). ID is about detectable design, not about design in general. You should know that very well. Moreover, you should know very well that a pattern is not enough to infer design. And what has God to do with all this? I hoped you were a more careful interlocutor. gpuccio
logically_speaking Twisting my words, shameful! The thing is Enkidu, if you think that conformity means being correct, then I would point out that most people who are living today and who have ever lived believed in a creator God and conformed to some religion. Therefore by your own reasoning they must be right, right No one ever said or implied ToE is right because it's the scientific consensus. Just the opposite. ToE is the scientific consensus because the vast majority of scientists who study it are convinced by the huge amount of consilient evidence it has. Enkidu
gpuccio writes:
Sand on a beach is not designed.
Yet if God created the World, surely the sand on the beach must be part of the design?
A Shakespeare sonnet is designed.
Funny way of putting it. Shakespeare composed and wrote sonnets. But God created the world and Shakespeare was part of the design so...
The pattern of the drops of rain is not designed.
But scientists are able to research patterns here. But God created the World so...
ATP synthase is designed.
By whom? How? When? But God created the World so...
A glacier is not designed.
But God created the World so...
Windows XP is designed.
By people. Some would claim not very intelligent ones, judging by how clunky it was. But God made people so...
A snowflake is not designed.
But snowflakes form into intricate designs under precise physical conditions. Who made those laws of nature? God of course!
Histone H3 is designed.
By whom? How? When? But God created the World so... Alan Fox
Enkidu, Twisting gpuccio's words, shameful! The thing is Enkidu, if you think that conformity means being correct, then I would point out that most people who are living today and who have ever lived believed in a creator God and conformed to some religion. Therefore by your own reasoning they must be right, right? logically_speaking
gpuccio There are few things in this world that I really despise. One of them is the appeal to conformism in thought. I see. You'd rather be unique and dead wrong than conform and be correct. That explains a lot of your silly assertions, actually. Enkidu
Enkidu at #333: There are few things in this world that I really despise. One of them is the appeal to conformism in thought. gpuccio
Astroman at #335: "gpuccio, serious questions:" Serious answers. "What do you hope to accomplish by endlessly repeating your ID claims on one or a few obscure blogs?" I like intellectual confrontation about ideas which, for me, are important. "Why are you so afraid to submit your claims to Bio-Complexity, and to reputable scientific journals?" I am not afraid. I have no desire to do that. "What have you got to lose by trying?" I don't do things only because I have nothing to lose. I do things because I want to do them. "Why are you ID proponents so afraid to answer questions and actually demonstrate your allegedly accurate and reliable methods?" I answer for myself: I am not afraid at all. I answer all intelligent questions and always try to demonstrate my accurate and reliable methods. "Why are you ID proponents so afraid of naming a variety of designed vs. non-designed things in nature and demonstrating how you can scientifically determine the difference?" I answer for myself. I am not afraid at all. I believed that to give the procedure was a better answer. If you like a list, have a list. Sand on a beach is not designed. A Shakespeare sonnet is designed. The pattern of the drops of rain is not designed. ATP synthase is designed. A glacier is not designed. Windows XP is designed. A snowflake is not designed. Histone H3 is designed. Shall I go on? "Why are you ID proponents so dishonest about your religious and political motives and agenda?" I answer for myself. I have no religious or political agenda at all. I have never used a religious or political argument here to defend ID. Indeed, I don't like others doing that. I never speak of my religious, least of all political, ideas here. Very rarely, I have taken part in some philosophical or even religious debates here, without mixing them with the scientific aspects of ID. But even that has happened very rarely. One of my strongest beliefs is that science must not be conditioned by religious or philosophical ideas, as far as it is humanly possible (I also believe that cognitive bias can never be completely eliminated in any human cognitive activity). "Millions of hard working people are out there doing science and adding to useful knowledge everyday" And I respect them very much for that. I am a scientist too, although my field of activity is slightly different. "while you and the other ID proponents are bashing science and scientists" I answer for myself. I consider science one of the best human activities. I deeply despise scientism, which is only a bad philosophy, and a disgrace for true science. "and pushing your religious and political Wedge agenda." Is debating the ideas one believes in, in a public blog, an agenda? Well, I have always thought that it would be exciting to be a conspirator of some kind. Maybe I have succeeded without realizing it! :) "What are any of you contributing to the world in a positive way?" I answer for myself. What I can. But, certainly, I don't answer to you for that. gpuccio
k said: "KS, do the words policy + advisor mean anything to you?" k, according to your claims you advised the previous Montserrat government(s) on policy matters yet you claim above that a bunch of things need to be fixed because of "long term consequences of policy errors". Sounds like the government of Montserrat and the people who live there would be a lot better off without your poor advice. Astroman
Astroman
Silver Asiatic, in addition to a lot of other things you should learn that using the term “Darwinism” is a revealing demonstration of your lack of knowledge regarding modern Evolutionary Theory.
It's good that I have the opportunity to be taught by believers in modern Evolutionary Theory here every day. We had a lengthy discussion on the term "Darwinism" last month or so - it went on for about a week with 500 posts or so. Of course, some evolutionists disagree with your point of view but that's the way it works. There's no official, authoritative source for what "modern Evolutionary Theory" is. There's no official source that says "Darwinism" is an incorrect term. You may have a lot of knowledge in that area. If you'd like to post your credentials, I'd appreciate learning about your expertise and where you currently work. If not, you're an anonymous guy on this blog making claims about evolution and I have no reason to accept what you have to say - especially since I can cite scientists who know a lot more about evolutionary biology than you do (at least until you prove otherwise) who disagree with what you have to say. So, the problem is in the evolutionary community itself. I can see that quite clearly every day. I'm open to whatever you have to say but what you said thus far was not convincing at all. But I appreciate hearing your opinions on this. Silver Asiatic
LoL! @ astroboy. That is an all-time classic cowardly comeback. Good luck on your search for this alleged evolutionary theory. Joe
Joe, assuming or hoping that certain people are bothered by whatever it is that you assume or hope they are bothered by is a really lame way to spend your life. Astroman
Astroman, please stop talking about a modern evolutionary theory seeing that you cannot find it. Darwin is still the only one to come close to positing a theory of evolution. Joe
astroman:
those are some hilarious and evasive examples you chose to represent “a variety of designed vs. non-designed things in nature”.
What? Those were valid examples.
I’m not asking about things that are done or made by humans and you know it.
Why not? It counts. My point is not everything has to be designed just because the universe was. Accidents would still happen. We would need something more to go on- for example "The Privileged Planet" has the Earth and its place being designed because sheer dumb luck doesn't cut it and there are many factors at play. However it doesn't say that the moons of Mars had to be designed. Those could have been captured without ID. The universe has its laws. Our solar system and our place in it has its evidences. But accidents still would happen. That is where the EF comes in. Joe
Silver Asiatic, in addition to a lot of other things you should learn that using the term "Darwinism" is a revealing demonstration of your lack of knowledge regarding modern Evolutionary Theory. Astroman
Rich:
Oh the joy of knowing what Joe F will or won't do before he does it.
I was going by what he already did, cupcake. What is your problem Richie pom-poms the cheering section? Joe
astroman:
Millions of hard working people are out there doing science and adding to useful knowledge everyday
And we like that because the more they find out the better ID looks. Does that bother you? Joe
Oh the joy of knowing what Joe F will or won't do before he does it. Designer be praised! http://cdn.meme.am/instances/500x/55904963.jpg Rich
I asked: "Why are you ID proponents so afraid of naming a variety of designed vs. non-designed things in nature and demonstrating how you can scientifically determine the difference?" Joe responded: "We have. Do you think that all deaths are murders or do you think we can tell the difference? Are all fires arson or can we tell the difference? Are all rocks artifacts or can we tell the difference?" LOL, those are some hilarious and evasive examples you chose to represent "a variety of designed vs. non-designed things in nature". I'm not asking about things that are done or made by humans and you know it. And before you go off on another cowardly detour, I'm also not asking about things like bird nests, beaver dams, etc. Astroman
LoL! @ Rich:
It's entirely plausible that two threads had commentary disabled because the "home team" was doing too well.
The game was over. All your team had left was to ignore and rant. We don't need that. You lost. Deal with it.
And that Andre won't talk to Joe F. about PCD at TSZ because of rude words.
Joe F won't even deal with the issue.
And that CSI / FIASCO is real despite your inability amongst yourselves to describe it / calculate it / determine if nature can create it / use it to detect design.
We have done that, Rich. Your willful ignorance is not an argument nor a refutation. Your trope might work on evo forums but it doesn't wash here. Joe
Phoodoo #328
The whole thing is a mess, and that is why no one makes any attempt to quantify or create a dialog of how it could happen. No one attempts to say how it could happen in detail.
That's the way I see it also - an enormous mess. Victory is claimed after a minor, highly debatable point possibly reaches a stalemate, forgetting that scientists are walking away from Darwinism. No distinction is made with regard to the many, often conflicting proposed mechanisms. It's never admitted that even among the Darwinian faithful there are selectionists and those that favor genetic drift as the primary (or even sole) driver of change. Internal debates in the biological community are covered over with the claim "that's how science works". Every week there are "new findings" that "overturn previously held views" and these are downplayed as "nothing new". Claims of empirically validated evolutionary sequences are based on hypothetical constructs and on the premise of common descent (tautology). In the end, no detail is given to support the grand claims because "only creationists demand such detail" - obviously, evolutionists are satisfied with the barest, even contradictory, evidence. Convergent evolution fills numerous gaps with the claim that "evolution finds the same solutions in different species". Sure - the same wildly improbable solutions, that is. So, yes -- it is a mess and nobody wants to have an honest dialogue about it. The more I read the passionate and ill-supported claims from evolutionists here (they have every opportunity to be convincing) the more absurdly wrong it appears. I find that on a daily basis here. Silver Asiatic
Yes Joe. Have a cookie. It's entirely plausible that two threads had commentary disabled because the "home team" was doing too well. And that Andre won't talk to Joe F. about PCD at TSZ because of rude words. And that CSI / FIASCO is real despite your inability amongst yourselves to describe it / calculate it / determine if nature can create it / use it to detect design. Rich
And more Richie pom-poms the cheering section. Rich, keith s has been refuted. Not even your cowardly and ignorant belligerence can save him. Joe
If keith s could model unguided evolution producing objective nested hierarchies we would have something to discuss. But hey, he won't even deal with the fact that Darwin refuted his claims back in 1859. Sad, really. Joe
Ah, bless Joe: "Yes, YOU are. The comments are closed because YOU refuse to deal with the thorough refutations of your trope. And there isn’t anything left to discuss." When Iraqi need a new information minister, throw your hat in the ring. Rich
Enkidu:
It must be nice to have the confidence millions of working scientists are wrong and you are right.
What a bluffing choke that was. What are those alleged millions of working scientists right about, Enkidu? Do they have a model of unguided evolution producing something? Joe
keith s:
When readers see “Comments are closed” at the end of your screeds, they know you are fearful of criticism and open discussion.
Yes, YOU are. The comments are closed because YOU refuse to deal with the thorough refutations of your trope. And there isn't anything left to discuss. Joe
astroman:
Why are you ID proponents so afraid to answer questions and actually demonstrate your allegedly accurate and reliable methods?
We have. OTOH your position doesn't have anything to compare them to.
Why are you ID proponents so afraid of naming a variety of designed vs. non-designed things in nature and demonstrating how you can scientifically determine the difference?
We have. Do you think that all deaths are murders or do you think we can tell the difference? Are all fires arson or can we tell the difference? Are all rocks artifacts or can we tell the difference? And AGAIN, if the materialistic position had answers or a methodology, this discussion would either not be taking place or it would be very different. Joe
KF, A word of advice. When readers see "Comments are closed" at the end of your screeds, they know you are fearful of criticism and open discussion. If you have no confidence in your ideas, why should anyone else? keith s
Let me post this here again. Linking to posts with comments turned off is cowardly. It suggests wanting to keep a "clean copy" without any discussion, so that it can continually be referred to as if it is beyond criticism. Shameful. Censorship, you're either for it or you ain't. Rich
Have any of my posts been deleted from this thread? Rich
gpuccio, serious questions: What do you hope to accomplish by endlessly repeating your ID claims on one or a few obscure blogs? Why are you so afraid to submit your claims to Bio-Complexity, and to reputable scientific journals? What have you got to lose by trying? Why are you ID proponents so afraid to answer questions and actually demonstrate your allegedly accurate and reliable methods? Why are you ID proponents so afraid of naming a variety of designed vs. non-designed things in nature and demonstrating how you can scientifically determine the difference? Why are you ID proponents so dishonest about your religious and political motives and agenda? Millions of hard working people are out there doing science and adding to useful knowledge everyday while you and the other ID proponents are bashing science and scientists, and pushing your religious and political Wedge agenda. What are any of you contributing to the world in a positive way? Astroman
k said: "F/N Self-serving advertisement of another of my cowardly Comments Off screeds: I have responded on record blathered on and on for the millionth time in my usual nauseating way regarding Islands of function, FSCO/I and related issues my pseudo-scientific nonsense, as well as abusive comments further demonstrating my complete lack of integrity by drumbeat repeating my abusive, sanctimonious, slanderous, false accusations toward anyone, and some opponents in particular, who oppose my self-righteous commandments here. KF" FIFY P.S. k, get over yourself, and grow a pair. Astroman
gpuccio I am not upset at all. I just point to your blatant errors. It must be nice to have the confidence millions of working scientists are wrong and you are right. Especially when the courage to have your ideas vetted by those scientists seems nonexistent. Enkidu
Keith, Astroman, Enkidu: Thank you for your kind interest in my scientific career, possible contributions to ID, and personal motivations. It's beautiful that so many friends care for me. But don't worry too much. I am very happy as I am, doing the things that I do. You seem to believe that my brilliant perspectives are limited by fear. I respect your opinion. After all, not all people can be heroes like you. gpuccio
Enkidu: I am not upset at all. I just point to your blatant errors. gpuccio
Astroman: We don't know who the biological designer(s) is. gpuccio
Astroman, relying on imagination, and denial of reality, is the stock and trade of neo-Darwinian science. EVOLUTIONARY JUST-SO STORIES Excerpt: ,,,The term “just-so story” was popularized by Rudyard Kipling’s 1902 book by that title which contained fictional stories for children. Kipling says the camel got his hump as a punishment for refusing to work, the leopard’s spots were painted on him by an Ethiopian, and the kangaroo got its powerful hind legs after being chased all day by a dingo. Kipling’s just-so stories are as scientific as the Darwinian accounts of how the amoeba became a man. Lacking real scientific evidence for their theory, evolutionists have used the just-so story to great effect. Backed by impressive scientific credentials, the Darwinian just-so story has the aura of respectability. Biologist Michael Behe observes: “Some evolutionary biologists--like Richard Dawkins--have fertile imaginations. Given a starting point, they almost always can spin a story to get to any biological structure you wish” (Darwin’s Black Box).,,, http://www.wayoflife.org/database/evolutionary_just_so_stories.html "Grand Darwinian claims rest on undisciplined imagination" Dr. Michael Behe - 29:24 mark of following video http://www.youtube.com/watch?feature=player_detailpage&v=s6XAXjiyRfM#t=1762s a prime example of undisciplined imagination instead of empirical science, is the bacterial flagellum: Calling Nick Matzke's literature bluff on molecular machines - DonaldM UD blogger - April 2013 Excerpt: So now, 10 years later in 2006 Matzke and Pallen come along with this review article. The interesting thing about this article is that, despite all the hand waving claims about all these dozens if not hundreds of peer reviewed research studies showing how evolution built a flagellum, Matzke and Pallen didn’t have a single such reference in their bibliography. Nor did they reference any such study in the article. Rather, the article went into great lengths to explain how a researcher might go about conducting a study to show how evolution could have produced the system. Well, if all those articles and studies were already there, why not just point them all out? In shorty, the entire article was a tacit admission that Behe had been right all along. Fast forward to now and Andre’s question directed to Matzke. We’re now some 17 years after Behe’s book came out where he made that famous claim. And, no surprise, there still is not a single peer reviewed research study that provides the Darwinian explanation for a bacterial flagellum (or any of the other irreducibly complex biological systems Behe mentioned in the book). We’re almost 7 years after the Matzke & Pallen article. So where are all these research studies? There’s been ample time for someone to do something in this regard. Matzke will not answer the question because there is no answer he can give…no peer reviewed research study he can reference, other than the usual literature bluffing he’s done in the past. https://uncommondesc.wpengine.com/irreducible-complexity/andre-asks-an-excellent-question-regarding-dna-as-a-part-of-an-in-cell-irreducibly-complex-communication-system/#comment-453291 More Irreducible Complexity Is Found in Flagellar Assembly - September 24, 2013 Concluding Statement: Eleven years is a lot of time to refute the claims about flagellar assembly made in Unlocking the Mystery of Life, if they were vulnerable to falsification. Instead, higher resolution studies confirm them. 
Not only that, research into the precision assembly of flagella is provoking more investigation of the assembly of other molecular machines. It's a measure of the robustness of a scientific theory when increasing data strengthen its tenets over time and motivate further research. Irreducible complexity lives! - http://www.evolutionnews.org/2013/09/more_irreducibl077051.html bornagain77
Box, I think both problems are equally problematic for unguided evolution. And that is before we even stop to consider the necessity of all of evolution's changes to an organism occurring sequentially. How can an optic nerve only form AFTER we have a fluid-filled focusing mechanism, and not before? The whole thing is a mess, and that is why no one makes any attempt to quantify or create a dialog of how it could happen. No one attempts to say how it could happen in detail. phoodoo
F/N: I have responded on record regarding Islands of function, FSCO/I and related issues as well as abusive comments here. KF kairosfocus
Phoodoo,
Phoodoo #325: There is no way, if life was nothing but an arms race, that it would always stay so balanced to allow so much diversity. Eventually one side would win the whole prize.
You argue that the balance between the diverse life forms cannot be explained / maintained by unguided evolution. In post #286, I argue that the balance in an organism cannot be maintained by unguided evolution.
If one supposes that an organism is just a collection of chemical processes, one must assume a delicate balance between those chemical processes. The introduction of a novel protein – without a fitting regulatory system already in place – can only be detrimental to an organism.
We are both making holistic arguments. We are just pointing towards different 'wholes'. Your argument is wrt life as a whole, my argument is wrt the organism as a whole. Box
bornagain, Those are some great points. To me that has always been one of the biggest evidences that made me realize evolution couldn't possibly be correct. If we are talking about an unguided process, one that simply weights itself towards the organism that finds the best way to copy itself successfully, then quite obviously how could a mammal ever compete with a rapidly multiplying bacterium, or for that matter, an all-consuming toxic sludge that keeps doubling in size and consumes any carbon it covers? There is no way, if life was nothing but an arms race, that it would always stay so balanced to allow so much diversity. Eventually one side would win the whole prize. phoodoo
astroman:
Your denial of reality, and lack of self-awareness are a sight blight to behold.
Exactly what we say of you. Thanks. Joe
OK so still no theory of evolution nor any research towards the blind watchmaker thesis. Got it Joe
bornagain77 said: "Moreover, Behe has been vindicated in spades,,, Care to refute his work with actual evidence or are you content to sling mud and call it a day?" Your denial of reality, and lack of self-awareness are a sight blight to behold. Astroman
bornagain77, to say that your projection is extreme would be putting it mildly. And something you really should keep in mind when you say things like "Ad hominem does not empirical refutation make!" is the massive amount of ad hominems that have been and/or are spewed by you, kairosfocus, Joe, Dembski (remember the fart video?), phoodoo, and virtually all other ID-Creation proponents. Astroman
phoodoo, to say that your projection is extreme would be putting it mildly. Astroman
Sometimes materialists, instead of conceding the fact that pathogens do not present any evidence for 'vertical' Darwinian evolution, will complain that a 'loving' God would not make pathogens. Yet there is very good reason to believe that pathogens were originally created 'non-pathogenic' and became pathogenic through 'downhill' evolutionary processes: Setting a Molecular Clock for Malaria Parasites - July 8, 2010 Excerpt: The ancestors of humans acquired the parasite 2.5 million years ago. "Malaria parasites undoubtedly were relatively benign for most of that history (in humans), becoming a major disease only after the origins of agriculture and dense human populations," said Ricklefs. http://www.nsf.gov/news/news_summ.jsp?cntn_id=117259 "the AIDS virus originated relatively recently, as a mutation from SIV, the simian immuno-deficiency virus. According to Wikipedia, this virus was also benign in its original form:.. Unlike HIV-1 and HIV-2 infections in humans, SIV infections in their natural hosts appear in many cases to be non-pathogenic. Extensive studies in sooty mangabeys have established that SIVsmm infection does not cause any disease in these animals, despite high levels of circulating virus." https://uncommondesc.wpengine.com/intelligent-design/macroevolution-microevolution-and-chemistry-the-devil-is-in-the-details/#comment-448372 Forcing bacteria to 'evolve' turns helpful bacteria into pathogenic bacteria: From friend to foe: How benign bacteria evolve to virulent pathogens, December 12, 2013 Excerpt: "Bacteria can evolve rapidly to adapt to environmental change. When the "environment" is the immune response of an infected host, this evolution can turn harmless bacteria into life-threatening pathogens. ...It is thought that many strains of E. coli that cause disease in humans evolved from commensal strains." http://medicalxpress.com/news/2013-12-friend-foe-benign-bacteria-evolve.html Genetic study shows that bubonic plague (Black Death) was caused by loss of genes and streamlining (genetic entropy) of a non-pathogenic bacteria: The independent evolution of harmful organisms from one bacterial family - April 21, 2014 Excerpt: "Before this study, there was uncertainty about what path these species took to become pathogenic: had they split from a shared common pathogenic ancestor? Or had they evolved independently" says Professor Nicholas Thomson, senior author from the Wellcome Trust Sanger Institute. "What we found were signatures in their genomes which plot the evolutionary path they took. For the first time, researchers have studied the Black Death bacterium's entire family tree to fully understand how some of the family members evolve to become harmful.,,, The Yersinia family of bacteria has many sub species, some of which are harmful and others not. Two of the most feared members of this bacterial family are Yersinia pestis, the bacterium responsible for the bubonic plague or the Black Death, and Yersinia enterocolitica, a major cause of gastroenteritis. Previous studies of this family of bacteria have focused on the harmful or pathogenic species, fragmenting our full understanding of the evolution of these species.... "Surprisingly they emerged as human pathogens independently from a background of non-pathogenic close relatives. These genetic signatures mark foothold moments of the emergence of these infamous disease-causing bacteria." 
The team found that it was not only the acquisition of genes that has proven important to this family of bacteria, but also the loss of genes and the streamlining of metabolic pathways seems to be an important trait for the pathogenic species. By examining the whole genomes of both the pathogenic and non-pathogenic species, they were able to determine that many of the metabolic functions, lost by the pathogenic species, were ancestral. These functions were probably important for growth in a range of niches, and have been lost rather than gained in specific family lines in the Yersinia family. "We commonly think bacteria must gain genes to allow them to become pathogens. However, we now know that the loss of genes and the streamlining of the pathogen's metabolic capabilities are key features in the evolution of these disease-causing bacteria," http://phys.org/news/2014-04-plague-family-independent-evolution-bacterial.html We are living in a bacterial world, and it's impacting us more than previously thought - February 15, 2013 Excerpt: We often associate bacteria with disease-causing "germs" or pathogens, and bacteria are responsible for many diseases, such as tuberculosis, bubonic plague, and MRSA infections. But bacteria do many good things, too, and the recent research underlines the fact that animal life would not be the same without them.,,, I am,, convinced that the number of beneficial microbes, even very necessary microbes, is much, much greater than the number of pathogens." http://phys.org/news/2013-02-bacterial-world-impacting-previously-thought.html#ajTabs If evolution were actually the truth about how all life came to be on Earth then the only 'life' that would be around would be extremely small organisms with the highest replication rate, and with the most mutational firepower, since only they would be the fittest to survive in the dog eat dog world where blind pitiless evolution rules and only the 'fittest' are allowed to survive. The logic of this is nicely summed up here: Richard Dawkins interview with a 'Darwinian' physician goes off track - video Excerpt: "I am amazed, Richard, that what we call metazoans, multi-celled organisms, have actually been able to evolve, and the reason [for amazement] is that bacteria and viruses replicate so quickly -- a few hours sometimes, they can reproduce themselves -- that they can evolve very, very quickly. And we're stuck with twenty years at least between generations. How is it that we resist infection when they can evolve so quickly to find ways around our defenses?" http://www.evolutionnews.org/2012/07/video_to_dawkin062031.html i.e. Since successful reproduction is all that really matters on a neo-Darwinian view of things, how can anything but successful reproduction be realistically 'selected' for? Any other function besides reproduction, such as sight, hearing, thinking, etc., would be highly superfluous to the primary criteria of successfully reproducing, and should, on a Darwinian view, be discarded as so much excess baggage since it would, sooner or later, slow down successful reproduction. But that is not what we find. Time after time we find organisms cooperating with each other in ways that have nothing to do with their individual 'fitness to reproduce': NIH Human Microbiome Project defines normal bacterial makeup of the body - June 13, 2012 Excerpt: Microbes inhabit just about every part of the human body, living on the skin, in the gut, and up the nose.
Sometimes they cause sickness, but most of the time, microorganisms live in harmony with their human hosts, providing vital functions essential for human survival. http://www.nih.gov/news/health/jun2012/nhgri-13.htm bornagain77
Enkidu, Is that more preposterous than believing that conscious, incredibly precise, incredibly irreducibly complex beings arose from accidental swirls of bad replicating mud? Mud for which you don't even have an explanation of where it came from, in a universe full of constant laws whose origin you also can't explain? I guess I have a little less suspension of disbelief than you do. Do you know what the purpose of the world is? Do you know what tools a creator would or wouldn't have to work with? Do you believe that the world should be only good, without the possibility of anything bad? How do you know that was ever an option? What would a world that had only good, and nothing bad, be like? Would you need to spend any effort or energy in such a world? Why? phoodoo
Phoodoo, Behe claims the malaria that kills approx. 600,000 people every year was intentionally designed by God.
"Here’s something to ponder long and hard: Malaria was intentionally designed. The molecular machinery with which the parasite invades red blood cells is an exquisitely purposeful arrangement of parts. C-Eve’s children died in her arms partly because an intelligent agent deliberately made malaria, or at least something very similar to it. - Michael Behe, The Edge Of Evolution, p. 237
He also claims that God comes back every few years and gives the malaria parasite resistance to our new anti-malarial drugs, ostensibly so it can kill more people. Do you think he is correct? Enkidu
Enkidu@313 What in the world are you talking about? What is the mainstream media that Behe hasn't been publishing in? You mean like his works don't count, because he has been publishing them, in like science journals about biology? He has published over 35 articles in refereed science journals. His ID work has been panned by the scientific community? You mean like by Larry Moran?? I am sure your regurgitated spin goes over very well at talkorigins. Nature magazine knows there is a problem with the elusive "Theory of Evolution". You exist in the lunatic fringe worlds of Larry Moran and PZ Myers. phoodoo
Ad hominem does not empirical refutation make! Moreover, Behe has been vindicated in spades,,, Care to refute his work with actual evidence or are you content to sling mud and call it a day? bornagain77
Correction. Behe has published exactly 2 papers in the mainstream scientific literature in the last 13 years. I forgot the 2004 stinker with fellow Creationist David Snoke. Enkidu
phoodoo "The guy is incapable of understanding even one thing Michael Behe has written. Does Moran actually publish any work at all?" Behe has published a grand total of 1 paper in the mainstream scientific literature in the last 13 years. Virtually of his writing is done as popular press articles and books pushing his ID beliefs. His ID work has been widely panned by the scientific community for its lack of rigor and unjustified conclusions. Enkidu
Actually, Phoodoo, as you can't find / haven't looked for Moran's work, you've told us all we need to know.. about you. I enjoyed the projection, though! ;) http://biochemistry.utoronto.ca/moran/bch.html Rich
You know Astroman, just going to the cesspool that is Larry Moran's blog is a lesson in seeing the desperation of the science community, just saying anything that they can possibly think of to sound convincing that their theory is not in danger. It's practically a parody of obfuscation, misdirection, words taken out of context, blind apologetics, and a circus of evangelists whose only purpose in life is to deny any and every piece of evidence which might cause them to be forced to concede they got something wrong. It is as far from being a discussion about science as a Justin Bieber song is about art. I don't see how anyone with even a hint of knowledge can spend more than two minutes reading there without wanting to say, Geez Moran, what a disingenuous fricking liar you are. And you are a university professor? The guy is incapable of understanding even one thing Michael Behe has written. Does Moran actually publish any work at all? phoodoo
Astroman, Please go to Businessweek and read about how GM silenced whistleblowers about flawed ignition switches, and go to USA Today and read how GM's CEO denied any hint of conspiracy in covering up knowledge of the faulty switches, and read about a thousand other articles about GM denying their cars had any problems, and then read about them finally admitting their cars had safety problems. Then maybe I will get around to reading the propagandist Larry Moran (whose very livelihood depends on evolution theory being credible) write about how the Nature article was wrong (even though it was written and supported by a whole team of evolutionary biologists). You must truly have an open mind about the subject...hahaha phoodoo
Keith S said to gpuccio: "Let’s suppose that the evil Darwinist cabal would quash any paper you submitted to a reputable scientific journal. What baffles me is why you don’t submit one to the ID-friendly journal BIO-Complexity. They desperately need more papers, and here you have a reliable design-detection procedure that will revolutionize science if it is as robust as you say. Don’t you have confidence that the ID-friendly reviewers will see the value of your contribution? The ID community is desperately in need of some good news. Why are you depriving them of it?" Keith, I'm baffled too, sort of. I would think that ID proponents who are confident that they have important ID procedures, evidence, and/or hypotheses to put forth would be eagerly submitting papers to paper starved Bio-Complexity (and reputable journals too) instead of just endlessly repeating their claims here or on their own blog, but I also realize that they're fearful of scrutiny, criticism, and rejection, including by ID promoting reviewers at Bio-Complexity. It takes courage to have a nothing ventured nothing gained attitude, and it's safer to have a nothing ventured nothing lost attitude. Astroman
Endiku:
Crick’s definition of information says nothing about any CSI or dFSCI.
It refers to biological function. And guess what? So does dFSCI. Here, see if you can follow:
Information means here the precise determination of sequence, either of bases in the nucleic acid or on amino acid residues in the protein.- Crick
The precise determination of sequence = the specification. Each protein consists of a specific sequence of amino acid residues which is encoded by a specific sequence of processed mRNA. Each mRNA is encoded by a specific sequence of DNA. The point is that biological information refers to the macromolecules that are involved in some process, be it transcription, editing, splicing, translation, or the functioning of proteins. OK, moving on:
Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the same sense required by the complexity-specification criterion (see sections 1.3 and 2.5). The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems.- Wm. Dembski page 148 of NFL
Deny all you want. ID has it laid out in black and white. from Kirk K. Durston, David K. Y. Chiu, David L. Abel, Jack T. Trevors, Measuring the functional sequence complexity of proteins, Theoretical Biology and Medical Modelling, Vol. 4:47 (2007):
[N]either RSC [Random Sequence Complexity] nor OSC [Ordered Sequence Complexity], or any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life. FSC [Functional Sequence Complexity] includes the dimension of functionality. Szostak argued that neither Shannon’s original measure of uncertainty nor the measure of algorithmic complexity are sufficient. Shannon's classical information theory does not consider the meaning, or function, of a message. Algorithmic complexity fails to account for the observation that “different molecular structures may be functionally equivalent.” For this reason, Szostak suggested that a new measure of information—functional information—is required.
This part is interesting:
First, as observed in Table 1, although we might expect larger proteins to have a higher FSC, that is not always the case. For example, 342-residue SecY has a FSC of 688 Fits, but the smaller 240-residue RecA actually has a larger FSC of 832 Fits. The Fit density (Fits/amino acid) is, therefore, lower in SecY than in RecA. This indicates that RecA is likely more functionally complex than SecY. (results and discussion section)
So yes, Crick's definition is exactly the type of biological information ID is talking about. And by standard measures of complexity it fits that bill. And by ID's specification it fits that. Biological information is both complex and specified. What else do you want? Joe
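As a side note, the "Fit density" comparison in the quoted passage is straightforward arithmetic. The sketch below assumes only the Fits and residue counts given in the quote, and adds a Szostak-style functional information helper (minus log2 of the fraction of sequences that perform the function) for context; it is an illustration, not code from Durston et al.

```python
import math

def functional_information(functional_fraction):
    # Szostak-style functional information: -log2 of the fraction of all
    # sequences that meet or exceed the required level of function.
    return -math.log2(functional_fraction)

# Fits and residue counts exactly as given in the Durston et al. quote above.
proteins = {
    "SecY": {"residues": 342, "fits": 688},
    "RecA": {"residues": 240, "fits": 832},
}

for name, p in proteins.items():
    density = p["fits"] / p["residues"]  # functional bits per amino acid
    print(f"{name}: {p['fits']} Fits / {p['residues']} aa = {density:.2f} Fits per residue")

# Toy illustration of the functional-information helper: if only 1 in 2^20
# random sequences performed a given function, that would be about 20 bits.
print(f"{functional_information(2 ** -20):.0f} bits")
```

It reproduces the quote's point: RecA packs roughly 3.5 Fits per residue versus roughly 2 for SecY, despite being the shorter protein.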
gpuccio So, why restrict the concept of design to humans? Only because you believe it helps your false arguments. Why are you upset that I used your example template and your definitions to 'prove' something that's demonstrably false? BTW I'm not the guy making the dFSCI claim that apparently you're the only person in the world championing. If you can't even convince your fellow IDers, how are you going to convince real scientists? As faulty as your reasoning is, I can't say I blame you for avoiding any professional critical examination of your dFSCI claims though. I'd be embarrassed to have my name associated with something so laughably bad too. Enkidu
gpuccio, is "The Intelligent Designer" of this universe an alien? If so, which other universe is it from? Astroman
And phoodoo, if you read up on ENCODE's little to no junk DNA claim you'll find that Nature magazine had no problem with publishing and hyping that claim. That claim goes against what some scientists think is an important prediction of Evolutionary Theory. So much for your ridiculous assertion that Nature magazine would do everything in its power to refrain from questioning the theory of evolution. Astroman
gpuccio,
Detecting design by dFSCI is a scientific procedure.
Let's suppose that the evil Darwinist cabal would quash any paper you submitted to a reputable scientific journal. What baffles me is why you don't submit one to the ID-friendly journal BIO-Complexity. They desperately need more papers, and here you have a reliable design-detection procedure that will revolutionize science if it is as robust as you say. Don't you have confidence that the ID-friendly reviewers will see the value of your contribution? The ID community is desperately in need of some good news. Why are you depriving them of it? keith s
Enkidu: Design is a process observed in humans. However, the process itself requires only the following: a) A conscious being with subjective representations and purposes. b) The observation that the specific functional form in an object is outputted to the object by the conscious being, from a conscious representation which precedes the output. That's what design is. Now, is it so difficult for you to imagine that intelligent and conscious aliens may exist? A lot of people believe that such a possibility is perfectly reasonable. So, would a conscious intelligent alien be capable of design? Yes. We could well have independent evidence that the alien designs something functional. Still, he would not be a human. So, why restrict the concept of design to humans? Only because you believe it helps your false arguments. Gross tricks. You are really boring. gpuccio
phoodoo, go to http://sandwalk.blogspot.com/ and read the recently posted article and comments about Nature magazine, science writing, ENCODE, etc. Then search for and read the many other articles and comments there and on other sites that criticize the hyping of little to no junk DNA by ENCODE and Nature magazine, and many other questionable or bogus claims by scientists or science writers. Notice that scientists don't hesitate to criticize other scientists and science publications. In science there are no unquestionable authorities. Astroman
To the wider group: Care to tell us which version of ID you subscribe to? The highly specific "certain features of the universe and of living things are best explained by an intelligent cause"? Because even I agree with that providing icecream is a "certain feature of the universe". So what is and isn't designed? A list would be useful. Rich
In the interest of clarity.... "A codon in DNA is a representation that must be translated... " cheers Upright BiPed
Enk: Empirically observed evolutionary processes can produce DNA sequences that meet your definition of dFSCI. GP: No. They can’t. Enk: Yes, they can.
Enkidu, something you might want to keep in mind: DNA is a representation that must be translated to have a physical effect. Representations are physical entities; they have unique physical properties that distinguish them from any other object in the physical world. Furthermore, DNA belongs to a special class of representation that has a dimensional orientation. This also has observable physical properties; it exists independently of the minimum total potential energy state of the medium. This special class of representation has only been found to exist in one other example, that is, in the translation of recorded language and mathematics. GPuccio's claim is absolutely true and can be substantiated by material observation. Your claim, however, merely assumes that Darwinian evolution originated the representational system that it (itself) requires in order to exist. That is incoherent. If A requires B to exist, then A cannot be the source of B. cheers... Upright BiPed
I think Nature Magazine would be less likely to admit there is anything wrong with the theory of evolution than General Motors would be willing to admit there is any problem with the safety of their vehicles. At some point, though, fighting it becomes impossible. phoodoo
Here is what I think, Astroman: I think that Nature magazine would do everything in its power to refrain from questioning the theory of evolution. I think they are one of the last publications to admit that there is any problem at all with the theory, unless they absolutely positively had no choice but to do so, because they would look completely foolish denying it. I think it is run by people whose lives DEPEND on presenting an unquestioned view of evolution (which they admit scares people in the business who rely on funding based on their paradigm of the theory). So for such a magazine, so entrenched in the evolutionist worldview, to even come close to admitting any problems in the theory is remarkable enough. For them to actually come out and ask if the WHOLE theory needs a rethink? Now that is saying something! phoodoo
phoodoo, does the Nature magazine article say that Evolutionary Theory should be replaced by gpuccio's claims about dFSCI or by any other ID claims? And do you really believe that scientists think that Nature magazine or any of the articles in it are the be-all and end-all of Evolutionary Theory? Astroman
The whole concept of evolution is being called into question Enkidu. I don't even need to emphasize this. The article does it for me! They wrote this article, EVEN THOUGH THEY REALIZE: Quote - "Perhaps haunted by the spectre of intelligent design, evolutionary biologists wish to show a united front to those hostile to science. Some might fear that they will receive less funding and recognition if outsiders — such as physiologists or developmental biologists — flood into their field." So they are going to do everything they can to avoid admitting any division amongst themselves. And yet it's too undeniable to stop it. They actually have an agenda NOT TO ADMIT to any shortcomings in the theory, but they are forced to anyway. Your denying this is simply sticking your fingers in your ears, closing your eyes, and saying, "I don't see anything." So you are clearly NOT someone seeking the truth, you are an evangelist. phoodoo
phoodoo Enkidu, You can read, right? “The number of biologists calling for change in how evolution is conceptualized is growing rapidly.” Is Nature Magazine a propaganda shill for ID? There's nothing in the Nature article about ID either. Sorry again. Enkidu
"Living things do not evolve to fit into pre-existing environments, but co-construct and coevolve with their environments, in the process changing the structure of ecosystems." Whoa, that sounds like more than just questioning the relative importance of the known mechanisms there Enkidu. phoodoo
Enkidu, You can read, right? "The number of biologists calling for change in how evolution is conceptualized is growing rapidly." Is Nature Magazine a propaganda shill for ID? phoodoo
phoodoo Ok, then what is so serious to make them call the whole concept into question? The whole concept of evolution isn't being called into question. Merely the relative importance of the known mechanisms is being re-examined. Enkidu
Enkidu, Ok, then what is so serious to make them call the whole concept into question? phoodoo
phoodoo From Nature Magazine: “The number of biologists calling for change in how evolution is conceptualized is growing rapidly. Strong support comes from allied disciplines, particularly developmental biology, but also genomics, epigenetics, ecology and social science1, 2. We contend that evolutionary biology needs revision if it is to benefit fully from these other disciplines. The data supporting our position gets stronger every day.” Nature Magazine!! ??? Nothing in there about dFSCI. Sorry. Enkidu
gpuccio You can go on stating falsities. They remain falsities You can go on making your unsubstantiated claims about dFSCI. They remain unsubstantiated. I don't begrudge you your pet evolution-falsifying hypothesis. Lots of folks on the internet put up arguments for a flat Earth, or squaring the circle, or how UFOs kidnapped Elvis. But unfortunately until your claims can pass critical review from the appropriate scientific experts you'll remain a lone voice in the wilderness. Of course if you never submit these ideas you can keep living on the claim that mainstream science hasn't refuted you. Must seem like a victory. Enkidu
Enkidu, Then what is it about evolutionary biology that has made so many biologists question the prior paradigm of the theory? Do you want to point out what other problems have shaken their belief so much as to call into question the entire concept? phoodoo
Bornagain #77: The reason for this severe constraint on proteins transforming into different proteins of a new function is because of what is termed ‘context dependency’.
The articles referenced by Bornagain77 present several compelling reasons why evolving from protein A to protein B is extremely improbable; however, if I'm not mistaken, the most compelling reason is not among those listed. Let's suppose that a novel and potentially beneficial protein has been found in "nature's library".
Andreas Wagner: For example, there is this interesting fish called the winter flounder, which lives close to the Arctic Circle, in very deep, cold waters—so cold that our body fluids would freeze solid. Yet this fish survives there. It turns out that its ancestors discovered a new class of antifreeze proteins that work a bit similar to the antifreeze in your car.
Like any other protein, the antifreeze protein needs to be regulated by the organism. Obviously, too much of the protein will cause severe stress and a likely death. If the protein finds its way into the wrong places ... the same sad result. IOW a beneficial protein implies a regulatory system. If one supposes that an organism is just a collection of chemical processes, one must assume a delicate balance between those chemical processes. The introduction of a novel protein - without a fitting regulatory system already in place - can only be detrimental to an organism. Only those ancestors of Wagner's winter flounders that already had a complementary regulatory system in place would benefit from the introduction of an antifreeze protein. If the arrival of an antifreeze protein is highly unlikely, what to make of the fact that on top of that a fitting regulatory system has to be already in place? Box
From Nature Magazine: "The number of biologists calling for change in how evolution is conceptualized is growing rapidly. Strong support comes from allied disciplines, particularly developmental biology, but also genomics, epigenetics, ecology and social science1, 2. We contend that evolutionary biology needs revision if it is to benefit fully from these other disciplines. The data supporting our position gets stronger every day." Nature Magazine!! ??? phoodoo
gpuccio: "The reasoning you develop is essentially correct. There is only one mistake. You must drop “human” from the third phrase." Why? What independently confirmed instances of dFSCI do you have from non-human intelligences? Go ahead and say "DNA" and keep the circularity going. :) Enkidu
Enkidu: "Yes, they can. Unless you make the definition of dFSCI “a property that can only be achieved through a designing intelligence” in which case you’re circular again." No. In my definition of dFSCI there is no such thing. And there are no "empirically observed evolutionary processes" which can produce DNA sequences that meet my definition of dFSCI. You can go on stating falsities. They remain falsities. gpuccio
phoodoo "When you say statements like this, it appears that you really haven't kept up much with what is going on in evolutionary biology." Yes, I have kept up, but I haven't seen a single thing on dFSCI. I'd be happy to read any and all published scientific papers that say dFSCI provides solid evidence for an Intelligent Designer of life. Can you please supply those references? Thanks! Enkidu
I heard Richard Dawkins not too long ago, when asked what revelations in science would really shake his worldview, respond that if epigenetics turned out to truly have the power to influence more than one or two generations of offspring, then he would have to rethink his whole view about life. He must have said this before reading much of the current scientific literature. Richard, I am looking forward to seeing the new you, welcome in. phoodoo
Enkidu: The reasoning you develop is essentially correct. There is only one mistake. You must drop "human" from the third phrase. It is design which is independently defined. Design is any process where the specific form in the object is purposefully outputted by a conscious intelligent agent from a conscious representation. What we are defining here is the origin of the information, not the nature of the conscious intelligent agent (except for being a conscious intelligent agent). Any object whose form comes from a conscious intelligent purposeful representation is designed. In the definition of design there is no logical restriction to humans. With that correction, the reasoning is fine. But not precise. It lacks a specific definition of "Powered flight". And it would refer only to a subset of designed machines (powered flight machines). In particular, it lacks an objective definition of what a machine is. None of those limits is present in my definition based on dFSCI. gpuccio
gpuccio Enkidu: “Empirically observed evolutionary processes can produce DNA sequences that meet your definition of dFSCI.” No. They can’t. Yes, they can. Unless you make the definition of dFSCI "a property that can only be achieved through a designing intelligence" in which case you're circular again. I must say it's interesting that your logic indicates humans designed powered flight into birds and bats. You should put that in the paper when you publish too. :) Enkidu
Enkidu, "Doesn't the scientific community deserve to know how wrong it's been all these years?" When you say statements like this, it appears that you really haven't kept up much with what is going on in evolutionary biology. When Nature magazine runs an article entitled "Does evolutionary theory need a rethink?" - Nature Magazine, the biggest Darwinian cheerleading publication there is! - clearly they are searching to find out what has been wrong all these years! I can't imagine there is an active biologist in the country, who has looked at the details, who does not have some doubt about the explanation for the mechanics of evolution (unless their name is Jerry Coyne, and they are just angry at God). Look around, the voices are growing. phoodoo
gpuccio: "It is in the form: Property X can be observed in a number of objects. Of those objects, only in some cases we can ascertain independently the origin of the object itself. In all those cases, the origin is Y. Y is not defined in reference to X (that would create the circularity). Our problem is to infer an origin for the other objects exhibiting X, and whose origin is not known independently. There is no object exhibiting X for which an origin different from Y is known independently. Therefore, we infer Y as a good explanation for the origin of Y." (I assume that's a typo and you meant X) You say that's a valid argument? Let property X = powered, heavier than air flight. Let property Y = human design. That gives us: Powered flight can be observed in a number of objects (airplanes, birds, bats). Of those objects, only in some cases (airplanes) we can ascertain independently the origin of the object itself. In all those cases, the origin is human design. Human design is not defined in reference to powered flight (that would create the circularity). Our problem is to infer an origin for the other objects exhibiting powered flight (birds, bats), and whose origin is not known independently. There is no object exhibiting powered flight for which an origin different from human design is known independently. Therefore, we infer human design as a good explanation for the origin of powered flight in birds and bats. gpuccio. gun. foot. BANG! :) Enkidu
Enkidu: Just for the record, if your argument is that "Empirically observed evolutionary processes can produce DNA sequences that meet your definition of dFSCI" that would make my argument wrong, but not circular. You have to decide yourself. I can show that you are wrong about the circularity (very easy). Or I can show that you are wrong about "empirically observed evolutionary processes which can produce DNA sequences that meet my definition of dFSCI". It just requires a little more time. But you must decide yourself. Either my reasoning is circular (IOWs, it is a logical fallacy, and there is no empirical argument to call on) or it is empirically wrong (it is logically correct, but it can be empirically falsified). Decide yourself. Which is which? gpuccio
Enkidu: "Empirically observed evolutionary processes can produce DNA sequences that meet your definition of dFSCI." No. They can't. gpuccio
gpuccio: "What is true is that the only independently known source of dFSCI is human design." That is a false statement. Empirically observed evolutionary processes can produce DNA sequences that meet your definition of dFSCI. Publication date? It's not fair to keep this magnificent idea all to yourself now is it? Doesn't the scientific community deserve to know how wrong it's been all these years? Enkidu
Enkidu: What is true is that the only independently known source of dFSCI is human design. This is my argument, and it is true. And it is not circular. It is in the form: Property X can be observed in a number of objects. Of those objects, only in some cases we can ascertain independently the origin of the object itself. In all those cases, the origin is Y. Y is not defined in reference to X (that would create the circularity). Our problem is to infer an origin for the other objects exhibiting X, and whose origin is not known independently. There is no object exhibiting X for which an origin different from Y is known independently. Therefore, we infer Y as a good explanation for the origin of Y. This reasoning is not circular. And what you say is wrong. gpuccio
gpuccio: "Indeed, in the form you gave it would be circular, because it is not true that “The only powered flying machines we know about are human designed” Just as it is not true that "the only source of dFSCI we know about is human intelligence". I'm glad you finally admit to the circularity of your argument. When's your publication date again and with which journals? I want to be sure to reserve my copies. Enkidu
Enkidu: To be in a form that means something, your first example should really be: "1.The only powered flying machines whose origin we know are human designed 2.Birds are powered flying machines 3.Therefore birds are designed." Indeed, in the form you gave, it would be circular, because it is not true that "The only powered flying machines we know about are human designed". I am sorry that I did not catch immediately your circularity. gpuccio
gpuccio: "I am amazed at the depth of your thinking." And I at yours. It takes someone really special to invent his own definitions and fatally flawed logic then claim they falsify one of the most important scientific theories of all time. So when will you be submitting this paradigm changing idea to any mainstream journals for publication? Enkidu
Joe: "No. By Crick’s definition of biological information the DNA in living organisms contains CSI/ dFSCI." Crick's definition of information says nothing about any CSI or dFSCI. Those are terms pushed only by the ID movement and used merely as rhetorical devices. They have zero applicability to anything in the real scientific world. Enkidu
Enkidu: I will not spend useful time to discuss with you. You don't even know what "circularity" means, and come here to discuss circularity. None of the examples you present is circular. Let's take the first one: "1.The only powered flying machines we know about are human designed 2.Birds are powered flying machines 3.Therefore birds are designed." It may be true or not, acceptable or not, but it is not circular. To be circular, it should be something like the following: 1.A bird is a designed object which can fly. 2.Birds can fly. 3.Therefore, birds are designed objects which can fly. From Wikipedia: "Begging the question means "assuming the conclusion (of an argument)", a type of circular reasoning. This is an informal fallacy where the conclusion that one is attempting to prove is included in the initial premises of an argument, often in an indirect way that conceals this fact.[1]" And: "Circular reasoning (Latin: circulus in probando, "circle in proving"; also known as circular logic) is a logical fallacy in which the reasoner begins with what they are trying to end with.[1] The components of a circular argument are often logically valid because if the premises are true, the conclusion must be true. Circular reasoning is not a formal logical fallacy but a pragmatic defect in an argument whereby the premises are just as much in need of proof or evidence as the conclusion, and as a consequence the argument fails to persuade. Other ways to express this are that there is no reason to accept the premises unless one already believes the conclusion, or that the premises provide no independent ground or evidence for the conclusion.[2] Begging the question is closely related to circular reasoning, and in modern usage the two generally refer to the same thing.[3]" When do you plan on learning what words mean before criticizing others with them? It is also very telling that in your examples you had to use the word "machine", which even in common language implies design. Ah, and you also used "rotary propulsion motor". I am amazed at the depth of your thinking. gpuccio
Enkidu:
You looked at DNA and defined one of its properties to be your “dFSCI”.
No. By Crick's definition of biological information the DNA in living organisms contains CSI/ dFSCI. Joe
gpuccio at 235 said Sorry for your confusion. dFSCI is a term I use, but it is a subset of CSI. The point is that it restricts specification to functional specification, which I have defined with some detail, and to digital forms of information. Keith is only partially right. It is indeed a subset, and not a “slightly modified version”. And, obviously, there are no flaws in the concepts of CSI and dFSCI. Least of all any circularity (I have answered in detail keith’s “argument” here Hi gpuccio, thanks for the reply. I had a quick look at your link and I'm sorry to say the circularity of the argument sticks out like the proverbial sore thumb. You looked at DNA and defined one of its properties to be your "dFSCI". Seems like you merely added quite a bit of superfluous technical jargon to the vague CSI definition. You defined dFSCI to be only producible by a human intelligence. Then you use your definitions to demonstrate your assertions. Completely circular. You could do the same with any property you find in life, i.e. 1.The only powered flying machines we know about are human designed 2.Birds are powered flying machines 3.Therefore birds are designed. or 1.The only undersea deep diving machines we know about are human designed 2.Whales are undersea deep diving machines 3.Therefore whales are designed are the same argument as 1.The only cases of dFSCI we know about are human designed 2.DNA is full of dFSCI 3.Therefore DNA is designed. All you have done is try to add some "window dressing" formality to the already discarded Creationist argument 1.The only rotary propulsion motors we know about were human designed 2.Bacteria flagella are rotary propulsion motors 3.Therefore flagella are designed. Of course I could be wrong. When do you plan on submitting this work to any scientific or mathematics journals for review and publication? Enkidu
And yet it somehow refutes Durston, et al? Joe
Joe: True. But it is irrelevant also because it applies intelligent selection to random mutation, IOWs intentional protein engineering. gpuccio
Details- how did vision systems evolve? How many mutations did it take? How many genes were involved? How many generations did it take? Unguided evolution cannot answer anything even though it is alleged to have a step-by-step process for constructing biological systems. That is why evos are forced to ignorantly attack ID and IDists Joe
Joe, please go on in detail about “the starting point were well-diversified populations- more than what exists today.”
LoL! Nice to see that you can't even follow along. I said that because that is the only way unguided evolution can explain the diversity of life observed today. Man you are dull. Joe
astroman- please go on in detail about the alleged theory of evolution- where is it? Who wrote it? When? Why are you ignorant about the relationship between the blind watchmaker thesis and evolution? I will have more questions later. Joe
Joe, please go on in detail about "the starting point were well-diversified populations- more than what exists today." For example, where did those populations come from, and how long ago? What sort of organisms were they? Why were there more populations then than there are now? While those populations existed, did they evolve? Were those populations affected by diseases and/or deformities? If so, what type of diseases and/or deformities? I may have more questions later. Astroman
The Szostak paper is irrelevant because it only deals with small polypeptides- 80 amino acids- and it only deals with binding to ATP. The proteins did NOT evolve via unguided evolution. Joe
Astroman: You say: "gpuccio said to Me_Think: “Excerpts have been quoted here, and comments made. It is my privilege, and anyone else’s on this blog, to comment on those things. Reasonably, and with the necessary cautions.” But to me he said: “But, as I said, it is not fair to comment without knowing the details of the argument. In the meantime, let’s leave the propaganda speakers to their “work”.” You can’t seem to make up your mind. Apparently it’s okay for ID proponents to comment about the book even though they haven’t read it and don’t know its details, but it’s not okay for ID opponents, e.g. Keith S. gpuccio, have you read the book? Do you know its details? Were the proper precautions taken by News (Denyse O’Leary) and other ID proponents before posting their opinions based only on the book announcement and their insatiable desire to find flaws in Evolutionary Theory?" I don't know if you just want to play games. I am not interested. I have not read the book. I have read what you have read here: the quotes and the comments of those who have read part of the book. And I have commented on those things. With what I consider reasonable caution. That's all. gpuccio
Alan: "I think I should get the book so gpuccio does not need to hear "Szostak paper" again!" Be my guest! Anything will be better than the Szostak paper :) OK, I must admit that the Wagner book could take its place as the best false argument, but we will have to wait and see. I am refraining from commenting too much, I don't want to disturb your brief moment of happiness. You really need it! gpuccio
The evolutionary position is so weak they have to deny theirs is the blind watchmaker thesis. How they think that helps, I don't know. Joe
Less talky, more sciencey, evos! Natural selection has proven to be impotent. Genetic drift doesn't help you. You don't have a mechanism capable of producing the diversity of life unless the starting point were well-diversified populations- more than what exists today. Joe
astroman:
Denton describes himself as agnostic but is clearly a creationist of some flavor.
Cuz an anonymous butt sez so
Berlinski also describes himself as agnostic but is clearly a creationist of some flavor,
Anonymous sez so. Really astro- is that the best you can do? Joe
Lest we forget- Wagner's position is irrelevant if it cannot be scientifically tested. And it cannot be scientifically tested... Joe
Rich:
And after years of googling "evolution can't", Denyse suddenly wished she understood more about evolution.
Nice projection as you don't understand anything about evolution and just make stuff up. Then you run away when you are exposed. Joe
Enkidu:
Convergent evolution is also a well understood phenomenon even at the molecular level.
No, it isn't. That is because it is only assumed to have happened. No one has observed and studied it. That means it cannot be well understood. Evos are so desperate that they just make stuff up. Joe
gpuccio, even if Keith S hasn't yet read the book it's obvious that he knows a lot more details about Wagner's position than you ID proponents do. Astroman
Enkidu:
All species on Earth are related.
By a Common Design. Joe
Enkidu:
Some IDers like to offer “The Privileged Planet” as their ID model.
That is the non-biological part of the model
That means the whole universe and everything in it is designed.
No, it doesn't. Just because roads and cars are designed does that mean there aren't any accidents? Joe
keith s:
One of the biggest flaws is the circularity involved in using CSI or dFSCI to prove that something could not have evolved.
LoL! We use CSI or dFSCI to INFER something was intelligently designed. We do that because A) there aren't any known cases of blind and undirected processes producing it and B) there are plenty of cases in which intelligent agencies have. Joe
gpuccio said to Me_Think: "Excerpts have been quoted here, and comments made. It is my privilege, and anyone else’s on this blog, to comment on those things. Reasonably, and with the necessary cautions." But to me he said: "But, as I said, it is not fair to comment without knowing the details of the argument. In the meantime, let’s leave the propaganda speakers to their “work”." You can't seem to make up your mind. Apparently it's okay for ID proponents to comment about the book even though they haven't read it and don't know its details, but it's not okay for ID opponents, e.g. Keith S. gpuccio, have you read the book? Do you know its details? Were the proper precautions taken by News (Denyse O'Leary) and other ID proponents before posting their opinions based only on the book announcement and their insatiable desire to find flaws in Evolutionary Theory? Astroman
Oops, sorry: Giuseppe s/b Gpuccio. Les chiens aboient, la caravane passe. (The dogs bark, the caravan moves on.) Alan Fox
This thread is a spectacular own goal, Denyse. Thank you.
Indeed! @ Gpuccio It's not just Jack Szostak! And here is another way to see that functionality is not rare. I've been saying for years "one in a gadzillion" is a bogus default argument.
When I asked it to calculate D for hundreds of pairs of bacteria, I was surprised to see — though their highly diverse genomes should have warned me — that even closely related organisms had highly diverse metabolic texts. Thirteen different strains of E. coli differed in more than 20 percent of their enzymes. An average pair of microbes differed in more than half of them. I had also suspected that bacteria living in the same environment — the soil, for example, or the ocean — might encounter similar nutrients and thus have similar metabolic texts. Wrong. Their metabolic texts were just as diverse, with a D just as different as that from bacteria living in different environments.
Excellent! On the evidence of Keith's quotes, I think I should get the book so gpuccio does not need to hear "Szostak paper" again! :) Alan Fox
keith @ 249
Wagner’s research shows that these barriers are rare, and that a random walk via viable neighbors will get about 80% of the way through the vast, hyperdimensional space before it encounters a barrier. And keep in mind that these random walks are choosing one viable neighbor at a time. Evolution has no such restriction.
It is interesting, but the search space gets exceptionally smaller only in higher dimensions. He states: "The libraries of regulatory circuits, of course, aren't three- or even four-dimensional. They occupy higher dimensions, where cubes become hypercubes and balls become hyperspheres." What I don't understand is how something (the 'libraries') can exist in higher dimensions. I can envisage 3 dimensions only. How can a thing exist in higher dimensions? Me_Think
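One common way to picture the "higher dimensions" Me_Think asks about is to treat each gene (or each position in a sequence) as one coordinate, so a 20-gene circuit is a point in a 20-dimensional space and its one-mutation neighbors are the adjacent corners of a hypercube. The sketch below assumes binary alleles purely for illustration; it is not Wagner's model, just a way of visualizing how an object "exists" in many dimensions.

# Each gene (or sequence position) is one coordinate, so a genotype of length L
# is a point in an L-dimensional space. Binary alleles are an illustrative simplification.
def hamming(g1, g2):
    return sum(a != b for a, b in zip(g1, g2))

def neighbors(genotype):
    """All genotypes one mutation away (Hamming distance 1)."""
    for i in range(len(genotype)):
        flipped = list(genotype)
        flipped[i] = 1 - flipped[i]
        yield tuple(flipped)

L = 20                    # e.g. a 20-gene regulatory circuit
g = (0,) * L              # one corner of the 20-dimensional hypercube
print("Dimensions:", L)
print("Total genotypes:", 2 ** L)                           # 1,048,576 corners
print("One-mutation neighbors:", len(list(neighbors(g))))   # only 20 of them
print("Distance to a neighbor:", hamming(g, next(neighbors(g))))  # always 1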
keith s: No, nothing quite similar. gpuccio
gpuccio:
“You are whistling past the graveyard.” An interesting metaphor. Thank you.
It's a fairly common idiom in English. Is there an Italian equivalent? keith s
Speaking of metabolic paths is not the same as speaking of proteins.
Pretty much everything he talks about is stale, except Chapter VI (at least that is not stale for me). He does talk about metabolic pathways, BUT the book's main thrust (it starts in Chapter VI) is the ease with which innovation can be discovered by traversing neutral genotype networks. He gives the example of the hammerhead ribozyme. There are 46 new shapes that can be discovered by evolution, BUT with a genotype network (made mainly of neutral gene mutations), the number of shapes which can be discovered is 10^19. He also discusses the proximity of genotype networks in various dimensions and calculates that, in the dimension where the regulatory 'library' exists, the space that is required to be searched for a new phenotype is just 10^-100 of the search volume (which he calls the 'library'). Me_Think
Me_Think, The book undermines ID, but not Darwin. I've already skimmed much of the book, and it's clear that Wagner sees mutation as random, just as he says in the interview. His point is not that mutation is nonrandom. It's that random mutation can find a lot of equivalent solutions to metabolic problems due to the connected nature of the space it is navigating. This is exactly what KF and gpuccio dread. They were hoping that the space would be highly disconnected with lots of barriers to evolution. Wagner's research shows that these barriers are rare, and that a random walk via viable neighbors will get about 80% of the way through the vast, hyperdimensional space before it encounters a barrier. And keep in mind that these random walks are choosing one viable neighbor at a time. Evolution has no such restriction. Fitness isn't like islands in a vast ocean, divided by seas too big to cross, as KF would have you believe. It's closer to a vast continent with a few lakes and ponds in it. The only way for evolution to get stuck is if it happens to land on an island in the middle of a lake. Wagner's research is fascinating. As usual, ID gets mauled by the steady advance of science. keith s
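A minimal sketch of the kind of constrained random walk keith s describes: start from a "viable" genotype, propose one-mutation steps, accept only steps that stay viable, and measure how far (in Hamming distance) the walk gets from its starting point. The genotype length, the viability rule, and the step count below are all invented stand-ins; Wagner's actual test is a flux-balance model of whole metabolisms, so this only illustrates the procedure, not his numbers.

import random

L = 200  # toy genotype length, far smaller than a real metabolic network

def viable(genotype):
    # Stand-in for Wagner's viability test: here "viable" simply means at least
    # 30% of positions are 1. This rule is arbitrary and purely illustrative.
    return sum(genotype) >= 0.3 * L

def random_walk(start, steps=20000):
    current = list(start)
    for _ in range(steps):
        i = random.randrange(L)
        current[i] ^= 1          # propose a one-position mutation
        if not viable(current):
            current[i] ^= 1      # reject steps that leave the viable set
    return current

start = [1] * L                  # a viable starting genotype
end = random_walk(start)
moved = sum(a != b for a, b in zip(start, end))
print(f"Changed {moved} of {L} positions while never leaving the viable set")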
keith s: "You are whistling past the graveyard." An interesting metaphor. Thank you. gpuccio
Astroman: Excerpts have been quoted here, and comments made. It is my privilege, and anyone else's on this blog, to comment on those things. Reasonably, and with the necessary cautions. gpuccio
Enkidu: Sorry for your confusion. dFSCI is a term I use, but it is a subset of CSI. The point is that it restricts specification to functional specification, which I have defined with some detail, and to digital forms of information. Keith is only partially right. It is indeed a subset, and not a "slightly modified version". And, obviously, there are no flaws in the concepts of CSI and dFSCI. Least of all any circularity (I have answered in detail keith's "argument" here: https://uncommondesc.wpengine.com/intelligent-design/no-bomb-after-10-years/ at post 1239. Strangely, he is not convinced. :) If you are interested in the ideas, we can discuss them at your ease. If your discussion is of the kind "Why hasn’t the news gotten out?", I am afraid I cannot help. gpuccio
Me_Think: Those are only statements out of context. Speaking of metabolic paths is not the same as speaking of proteins. From the little that has been reported here, I see nothing new on this "book": just a recycling of old trivial (and wrong) arguments in favor of NS, essentially the line which tries to demonstrate that the functional space is full of everything imaginable and desirable. But, as I said, it is not fair to comment without knowing the details of the argument. In the meantime, let's leave the propaganda speakers to their "work". gpuccio
Keith @ 217
I downloaded the book onto my Kindle, and sure enough, it confirms what I gathered from the interview. This is extremely bad news for ID.
It undermines both ID and Darwin. Quote from the same book: The power of natural selection is beyond dispute, but this power has limits. Natural selection can preserve innovations, but it cannot create them. And calling the change that creates them random is just another way of admitting our ignorance about it. For ID, the key sentence undermining search probability is this: we found out, for example to within 85 percent of the second circuit's wiring for circuits of twenty genes. In other words, starting from anywhere in the library—anywhere—you need not walk very far, only fifteen steps away from a genotype network, before finding the genotype network of any other circuit. It is as if your needle were always nearby, no matter where you started to search. The last sentence uncannily answers KF's many posts. Me_Think
dFSCI is just a slightly modified version of Dembski's CSI, and it suffers from most of the same flaws. One of the biggest flaws is the circularity involved in using CSI or dFSCI to prove that something could not have evolved. I explain that here. keith s
For having never read the book, some of you ID proponents sure are defending what you think it says. If reading the book is so critical to making informed posts/comments about it, why didn't UD and its supporters wait until all ID proponents had read the book before posting/commenting about it? Astroman
Is this a second bomb for Barry?
We've known for a long time that the 'islands of fitness' idea was dead (though KF and gpuccio are of course in denial). Wagner's work is a very nice, detailed nail in its coffin, however. Here's another nice excerpt from the same chapter:
When I asked it to calculate D for hundreds of pairs of bacteria, I was surprised to see -- though their highly diverse genomes should have warned me -- that even closely related organisms had highly diverse metabolic texts. Thirteen different strains of E. coli differed in more than 20 percent of their enzymes. An average pair of microbes differed in more than half of them. I had also suspected that bacteria living in the same environment -- the soil, for example, or the ocean -- might encounter similar nutrients and thus have similar metabolic texts. Wrong. Their metabolic texts were just as diverse, with a D just as different as that from bacteria living in different environments.
The Designer sure is a busy guy, isn't he? keith s
OK, I'm confused. After much Googling I discovered what dFSCI is supposed to be. The interesting thing is that there only seems to be one person on the whole planet actually using the term, someone named G. Puccio. If dFSCI really is the tool that once and for all proves Intelligent Design shouldn't it be all over science journals everywhere? Not to mention earning the author a Nobel prize for what would arguably be the greatest scientific discovery of all time? Why hasn't the news gotten out? Enkidu
Rich, to gpuccio:
So stop blogging and get to the lab!
And by all means, submit it to the well-respected, peer-reviewed journal BIO-Complexity! keith s
Is this a second bomb for Barry? Rich
gpuccio, I didn't say the news was "stunning". I said it was extremely bad news for ID and that it undermines the ID position. All of which is true. (Though I am looking forward to seeing how you, KF and the others attempt to spin it.) The library metaphor is of course not new, but the data about the sheer number of glucose-viable metabolisms (10^750!) is, as is the fact that you can make it 80% of the way through the design space by hopping from one viable point to another. Very bad news for ID, gpuccio. You are whistling past the graveyard. keith s
Great! "It’s easy. There is not a list. There is a scientific procedure." let's go step by step! "There is not a list" That gives Rich a sad :( BUT! You can make one, because: "There is a scientific procedure" and "It’s easy" So stop blogging and get to the lab! This is great news for ID and the breakthrough we've all been waiting for. We can even tighten up the definition some.. Less talky, more sciencey, IDists! Rich
Rich asked So what is and isn’t designed? A list would be useful. That got me thinking. Some IDers like to offer "The Privileged Planet" as their ID model. That means the whole universe and everything in it is designed. So shouldn't everything in the universe be awash with CSI and dFSCI and EIEIO and all the ID acronyms? We should be tripping over the stuff and yet no one's ever found any except the unsupported assertion that it's in DNA. Enkidu
k, you originally said: “Actually, atheists and the like can be design thinkers, and some are.” And I responded with: “Please name several or more atheists who accept “Intelligent Design”. Can you name even one?” You said atheists and I asked you to name atheists. I’m still waiting. And can you show me that the non-atheists Denton and Berlinski accept and promote your particular version of “Intelligent Design”? Can you show that any ID proponents besides you even understand your version of “Intelligent Design” P.S. Digging up corpses, making things up about the people and corpses you named, and adding a steaming pile of distractive, evasive, irrelevant, accusatory garbage, all in a futile attempt to support your claim that some atheists are "design thinkers" (one of your terms for "Intelligent Design" proponents), just shows how dishonest and desperate you are. P.P.S. Who or what do you mean by “and the like” in “atheists and the like”? Are you referring to allegedly evil sinners who don’t hold your Christian, Evangelical, Fundamentalist, ID-Creation beliefs? Astroman
keith s: Frankly, the only interesting thing in this "universal library" stuff is how it recycles Borges and his sweet, pessimistic and poetic view of reality. gpuccio
Rich: Before you accuse me of intellectual ambiguity: it should be dFSCI in the previous post, both times. dFSCO is a typo, not a new concept of mine. :) gpuccio
Rich: It's easy. There is not a list. There is a scientific procedure. Objects exhibiting dFSCO are best explained by design. All the rest can be designed or not, but we cannot detect design scientifically in those cases. Believing in the design of everything, or of some things, is a philosophical or religious problem. Detecting design by dFSCI is a scientific procedure. Do you understand that? For icecream, I let you decide... :) gpuccio
Gpuccio, Care to tell us which version of ID you subscribe to? The highly specific "certain features of the universe and of living things are best explained by an intelligent cause"? Because even I agree with that providing icecream is a "certain feature of the universe". So what is and isn't designed? A list would be useful. Rich
keith s: I suggest you stay calm. :) The low functional specificity of the antifreeze proteins has always been well known. Behe has clearly debated that in TEOE. Where is the stunning news? I have not read Wagner's book, but I have read similar considerations about metabolic pathways elsewhere. Again, they bear no relevance to the ID argument. I will wait, however, to know more about this specific argument before discussing it, to be correct. I think we will easily survive this "bad news" :) . gpuccio
" For example, the number of metabolisms with two thousand reactions that are viable on glucose exceeds 10^750." Reload, Denyse! You still have one good foot. ...And after years of googling "evolution can't", Densye suddenly wished she understood more about evolution. Rich
I downloaded the book onto my Kindle, and sure enough, it confirms what I gathered from the interview. This is extremely bad news for ID. Here's a key passage from Chapter 3, The Universal Library:
João computed the answer and quickly found that not one, not two, not three, but hundreds of E. coli's neighbors are viable on glucose. This discovery contained a simple but vital lesson: The uniqueness of this phenotype is but a deeply flawed prejudice. The neighborhood of one text contains many other viable texts like it. But nothing prepared us for what came next, when we began to venture further. João used E. coli as a starting point for deep probes of the metabolic library that led further and further away from the starting text. The objective was to learn how far we could travel -- hopping from one viable text to a viable neighbor, to the neighbor's neighbor, and so on -- without losing viability on glucose. How radically could a metabolic text be edited without losing its meaning? When João showed me the answer, my first reaction was disbelief. The furthest viable metabolism he found -- the one with the highest D -- shared only 20 percent of its reactions with E. coli. We had walked, computationally speaking, almost all the way through the library -- 80 percent of the distance that separates the furthest volumes -- before we were finally unable to find a glucose-viable text by taking a single step. Worried that this might be a fluke, I asked João for many more random walks, a thousand more, each preserving metabolic meaning, each leading as far as possible, each leaving in a different direction -- possible only because the library has so many dimensions [Hi, kairosfocus!]. When the answer came back, I was stunned once again. These random walks had led just as far away as the first one. Each of them lead to a metabolism that differed in almost 80 percent of its reactions from E. coli. They had found a thousand metabolic texts that shared very little with E. coli, except that all of them could produce everything a cell needs from the carbon and energy stored in glucose. If we had kept on walking, we would have found even more texts, too many even to count, although later we were able to estimate their number in parts of the library. For example, the number of metabolisms with two thousand reactions that are viable on glucose exceeds 10^750. The number of texts with the same meaning is itself hyperastronomical. The metabolic library is packed to its rafters with books that tell the same story in different ways. While we surely had not expected this, our explorations had revealed an even more bizarre feature of the library. The thousand random walks did not end in a few stacks of the library, where texts with similar meanings might huddle in small groups -- groups of metabolisms with similar sets of reactions. These texts were just as different from each other as they were from that of E. coli -- they encoded metabolisms with very different sets of chemical reactions. The library does not have clearly distinct sections, like rooms that separate all texts on history from those on science.
This is terrible news for ID. You had all better pray that a major flaw is discovered in Wagner's work. So much for the "needle in the haystack" and "islands of function" rhetoric. Better get the message out on the secret ID network: ixnay on the agnerWay. Perhaps someone will close comments on this thread, like they did on the other one. This thread is a spectacular own goal, Denyse. Thank you. keith s
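A rough sketch of one natural reading of the distance D in the excerpt above: the fraction of possible reactions on which two metabolic "texts" disagree. The reaction universe and the two example metabolisms are invented for illustration only; Wagner's model draws on thousands of known biochemical reactions and tests viability with flux-balance analysis, none of which is reproduced here.

# One reading of the distance D in the excerpt: the fraction of possible reactions
# on which two metabolic "texts" disagree. All values below are invented for illustration.
reaction_universe = [f"R{i}" for i in range(2000)]

metabolism_a = set(reaction_universe[:1000])       # reactions R0..R999
metabolism_b = set(reaction_universe[200:1200])    # reactions R200..R1199

def metabolic_distance(a, b, universe):
    """Fraction of the reaction universe on which the two metabolisms disagree."""
    differing = sum((r in a) != (r in b) for r in universe)
    return differing / len(universe)

print(f"D = {metabolic_distance(metabolism_a, metabolism_b, reaction_universe):.2f}")
# D = 0.20 here: the two texts disagree on 400 of the 2000 possible reactions.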
Also, since it doesn't appear to have dawned on any of the IDers here, Wagner's ideas do not bolster the ID position at all. They undermine it. Read this part of the interview carefully:
Think of a library that is so large that it contains all possible strings of letters. Each volume in this library contains a different string, and there would be many more volumes than there are atoms in the universe. We could call that a universal library. It would contain a lot of nonsense, but it would also contain a lot of interesting texts—your biography, my biography, the life story of every human who’s been alive, the political history of every country, all novels ever written. And it would also contain descriptions of every single technological innovation, from fire to the steam engine, to innovations we haven’t made yet. Nature innovates with libraries much like that one. A protein is basically a string of letters, corresponding to one of 20 different kinds of amino acid building blocks in humans. A single protein could be 100 amino acids. So we can think of a library of all possible amino acid texts. When evolution changes organisms, what it does is explore this library through random changes in DNA, which are then translated by organisms into changes in the amino acid sequences of proteins. A population is like a crowd of readers that goes from one text to the next. Now, how would you organize a library if you wanted to easily find the text on a particular technology? You would have a catalog, and have all the texts about, say, transistors in one section of the library. That works for us because we can read catalogs, but in nature’s library it’s very different. Evolution doesn’t have a catalog; its readers explore the library through random steps. There’s also something that’s very curious about this library. You think of antifreeze proteins as being a solution to a very specific problem that nature faces: “How do I keep this fish alive?” You may think there is only one single solution to this, one amino acid strain that provides this protection. But if that were the case, evolution would have a serious problem, because the library is so huge it could never be explored in the 4 billion years life has been around, or even 40 billion years. But it turns out there is not just one text that solves the problem, there are myriad texts that all have a different amino acid sequence specifying antifreeze protection. And these texts are not clustered in one corner of the library; they’re spread out all over…
If Wagner is correct, this is one more blow to the bogus 'islands of function' defense of ID. You had better hope that he turns out to be wrong. It will be interesting to read about his research. Thanks for bringing it to my attention, Denyse! I'm sure KF thanks you too. :-) keith s
Vishnu, You seem to be slightly brighter than Mung, though equally obnoxious. Let me try to explain this to you instead of to him. In the interview, Wagner stresses that mutations are random and that selection operates on them. That is a thoroughly Darwinian position. Here's how the interview closes:
WSF: What’s the main thing you want readers to take away from the book? AW: That there’s a fascinating world out there that Darwin didn’t have any idea about, and that really helps us explain how evolution can work. Evolution has been criticized from various quarters by people who say, “well, it can’t all just be random change.” The book shows principles that are in agreement with Darwinism, but go beyond it. There’s rhyme and reason to how life evolved. [Emphasis added]
I wrote:
There is nothing anti-Darwinian about Wagner’s thesis. DNA changes randomly, as he stresses. It’s nothing but selection working on random mutations. Wagner just adds — and this is not original to him, by any means — that the fitness landscape isn’t limited to one solution per problem. We knew that already, though IDers like kairosfocus try to downplay it with their “islands of function” rhetoric.
So when Mung claimed that Wagner's position was "non-darwinian" -- on the basis of a book announcement, fercrissakes -- he was wrong. You screwed up when you bought into Mung's story. keith s
Vishnu, Mung, Keiths: This is what the whole book is about. Excerpt from the book: The power of natural selection is beyond dispute, but this power has limits. Natural selection can preserve innovations, but it cannot create them. And calling the change that creates them random is just another way of admitting our ignorance about it. Me_Think
keiths: ok, you stupid IDists, I haven't read the book and I don't need to read the book. I read an interview. An interview trumps any content of any book! Mung
Look, keiths is either A) a true believer (despite the evidence), B) an idiot, or C) a troll. Either way, he lost his credibility long ago with genuine intellects who have called the bluff of blind watchmaker bullsh*t. His bomb was a fart. And his mind is not far behind. Buzz off keiths. Your entertainment value has crashed through the floor. Vishnu
keiths weighs in on the side of nonsensical arguments all the way down. Why am I not surprised? Mung
Never mind, Mung. If the difference between a book and a book announcement baffles you, then this will definitely be out of your intellectual reach. Readers without Mung's limitations can find the argument here: The Myth of Absolute Certainty keith s
keiths:
No, Mung’s objective was to argue that Wagner’s book was non-Darwinian, when it clearly isn’t.
You haven't read Wagner's book, so you don't know. Even if you did know, your logic is fallible. Must be wonderful to be a true skeptic. Mung
keiths:
Logic is a function of our fallible human minds. If our minds are fallible, then our logic might be incorrect. If we can’t be absolutely certain that our logic is correct, then we can’t be absolutely certain of the conclusion of the argument.
So why are you here? Why do you argue at all? You claim that logic is a function of our fallible human minds. How do you know this? And why should we believe your claim that only human minds are capable of employing logic? Don't computers use logic? Are computers fallible? Are there not other organisms, other than humans, that employ logic? Do they have minds? Are their minds fallible? How do you know? More on keiths' self-refuting nonsense:
Logic is a function of our fallible human minds. If our minds are fallible, then our logic might be incorrect. If we can’t be absolutely certain that our logic is correct, then we can’t be absolutely certain of the conclusion of the argument.
Unfortunately for you, your argument relies on logic. Its conclusion is uncertain. Therefore we have no reason to believe it is true. Mung
Forgive the lousy (hic) editing on the Saturday night :D Vishnu
mung: My guess is that keiths is trolling. My guess too, as stated previously. Earth to keith: did you actually read the book or do you plan to? Oh well. Who cares, really. I guess I'm officially losing interest in the answer.
Vishnu
Just wondering. Me neither. Rich
rich:
Have you read the book, Mung?
No, I haven't. But I was aware of it before it ever came up here at UD. Why do you ask? Mung
wd400:
Mung's mistake above, where he seems to treat each individual as a separate entity unlinked to others, is another example of the concept's relevance.
Each individual is a separate entity. That's what it means to be an individual. Now I would really like you to explain how you concluded that I treat [ignoring the weasel word 'seems'] each individual as a separate entity unlinked to others. Now here's a bit of irony to consider. I actually believe in the species concept. Any good Aristotelian does. But Darwinians don't. lol! So while I must admit that some individuals are linked to others, I find it very odd to be accused of just the opposite by someone who necessarily believes there are no real species! But that's likely to get us all off on a tangent. The primary claim put forth by Thornton and keiths and taken up by wd400, is that evolution just ain't like that. And you IDiots are a bunch of uneducated dolts for thinking otherwise. I happen to love this topic, because it fits in with evolutionary algorithms, which are supposed to prove evolution. Take your average GA. Each individual is assigned a "genome" by random means. That's about as far apart in "genome space" as you could desire. Yet they are all treated as if members of the same population to which the same fitness criteria apply. Yet I am the one who doesn't understand how evolutionary search works in the real world. Perhaps wd400 will explain how recombination between individuals of separate populations, species, families, genera, etc. keeps everyone on the same trajectory, and how this means that the entire space of all possible genomes doesn't need to be explored. Then perhaps wd400 can explain just how magical and mysterious and yes, miraculous, it is that such diversity of life can come about through exploration of only a tiny and closely related area of the space of all possible genomes. Life is a miracle. Every living thing is a miracle. There is no known mechanism for finding the miraculous, Darwinian or otherwise. Mung
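For reference, here is a minimal genetic algorithm of the kind Mung describes: genomes initialized at random, a single fitness function applied to the whole population, then truncation selection, crossover, and mutation. The target string, population size, and rates are arbitrary illustrative choices, not taken from any particular GA discussed in the thread.

import random

TARGET = "METHINKS"                      # arbitrary target; any string works
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(genome):
    # Number of positions matching the target; one fitness function for everyone.
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in genome)

def crossover(p1, p2):
    cut = random.randrange(len(TARGET))
    return p1[:cut] + p2[cut:]

# Random initial genomes: individuals start far apart in "genome space".
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]

for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:50]            # truncation selection
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(200)]

best = max(population, key=fitness)
print(f"Generation {generation}: best = {best!r}, fitness = {fitness(best)}")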
Have you read the book, Mung? Rich
Yet constantly we observe "convergence" between distant species, proteins that are virtually the same from unrelated species supposedly evolved multiple times.
Constantly? I can think of a couple of examples that might fit this bill (anti-freeze proteins being one), but not many. What did you have in mind? wd400
keiths: Mung quoted from the book announcement. Therefore Mung does not know the difference between a book and the book announcement. Mung: keiths quoted from the book announcement. Therefore keiths does not know the difference between a book and the book announcement. Yes, folks, it's true. This is the sort of interesting, exciting and relevant debate that allowing the regulars from TSZ to post here at UD engenders. I think it's high time we allowed Thornton back. Votes? Mung
Dr JDD said: "So how does the mathematics look for a particular protein arising multiple times in multiple species unrelated? Doesn't quite fit into that idea of Wagner's." All species on Earth are related. It's just a question of how far back in time you have to go to get to the last common ancestor. Convergent evolution is also a well understood phenomenon even at the molecular level. Similar environmental pressures can lead to similar solutions since the laws of physics work the same for all animals. Enkidu
Since this seems to be a sore spot with keiths: Mung: And here I thought I was arguing that people should read the book. See my post @ 64. How keiths manages to construe my encouraging people to actually read the book to mean that I don't know the difference between the book and an announcement about the book will probably forever remain a mystery. My guess is that keiths is trolling. He's trying to provoke an emotional response and has no interest in what is actually true or not true. Mung
keiths:
Apparently Vishnu, like Mung, can’t tell the difference between a book and a book announcement.
keiths doesn't care for the book announcement. I say fine, read the book. keiths can make proclamations about the author's position without reading the book. I say keiths ought to read the book. keiths claims I can't tell the difference between the book announcement and the book. To which I say, if I order the book from amazon and receive the book announcement and not the book, I will not be pleased with this. keiths has to make things up and lives in a fantasy world, and that is a true shame. I mean that sincerely. Mung
So let me get this straight - Wagner's reduced claim is that there are many available solutions to a problem, which is why random mutations can overcome the seemingly improbable solution. Yet constantly we observe "convergence" between distant species, proteins that are virtually the same from unrelated species supposedly evolved multiple times. So how does the mathematics look for a particular protein arising multiple times in multiple species unrelated? Doesn't quite fit into that idea of Wagner's. Dr JDD
It's neat that physicists are getting involved in the study of Evolution. Lots of cool papers being published. It can only help the BioEvo guys/gals. New perspectives sure help. Dr Wagner sees the writing on the wall. Lots of equations written on that wall lol. Natural Laws. Lots of incredibly tuned Natural Laws. "Blind Watchmaker" can retire soon:) ppolish
Keiths: I went by Wagner’s words, and Mung went by the book announcement... You foolishly bought into Mung’s tall tale.
My original post, quoting you and Mung:
keiths: You [Mung] were arguing that a book announcement written by some marketing person should take precedence over Wagner’s own words. Not a smart move. mung: And you reached this conclusion from my statements that reading the book [i.e., reading Wagner's own words] should take precedence?
And I was "astonished" by it. Then you reply:
Apparently Vishnu, like Mung, can’t tell the difference between a book and a book announcement.
How do you get that from my post? You accused Mung of asserting that he prefers marketing blurbs over the book itself, for which there is no justification based on what Mung wrote. Then you write:
No. Here’s what I actually wrote: All you have to do is read the interview to see that the book announcement is hype. But since you insist on being spoon-fed, open wide for the choo-choo train.
Which, of course, has nothing to do with my original post. Are you having trouble following?
No, Mung’s objective was to argue that Wagner’s book was non-Darwinian, when it clearly isn’t.
That's bizarre given Wagner's own words which you quoted in bold, which affirm the opposite of your assertion, for all intents and purposes:
Random chance still plays a role—we know that the DNA of organisms changes randomly. But there’s actually an organization process that helps these organisms discover new things.
Helps? Which is this process that "helps" the Darwinian (RV+NS) process? If it's not the Darwinian process itself, it cannot be Darwinian. Duh. If it's not Darwinian then we're officially outside the Darwinian paradigm.
Evolution has been criticized from various quarters by people who say, “well, it can’t all just be random change.” The book shows principles that are in agreement with Darwinism, but go beyond it. There’s rhyme and reason to how life evolved.
Go beyond Darwinism? Rhyme? Reason? If you "go beyond [Darwinism]" then, again, we're officially outside the Darwinian paradigm. Sounds like the blurb was right, it is in accord with Wagner's own words, and Mung interpreted it correctly. Reading your words is like watching a guy shoot himself in the foot in order to prove the gun doesn't work. Again, astonishing. At any rate, seems like an interesting book to me. I suggest you read it. I will. Now, I ask again: have you read the book? Vishnu
Vishnu,
Nice try, but Mung's obvious objective was to try to A) get you to actually read the book and/or B) ask whether you had read the book.
No, Mung's objective was to argue that Wagner's book was non-Darwinian, when it clearly isn't. I repeat:
Vishnu,
You made the accusation that Mung was preferring the announcement contents over the book,
No. Here’s what I actually wrote:
All you have to do is read the interview to see that the book announcement is hype.
But since you insist on being spoon-fed, open wide for the choo-choo train. I quoted the interview with Wagner, and wrote:
There is nothing anti-Darwinian about Wagner’s thesis. DNA changes randomly, as he stresses. It’s nothing but selection working on random mutations.
Mung quoted the book announcement, and wrote:
That’s about as non-darwinian as you can get.
I went by Wagner’s words, and Mung went by the book announcement. You foolishly bought into Mung’s tall tale.
keith s
KF: "PS: I just saw an assertion by DD that targets a particular objector rather personally, that is well over the line of civility, DD." Onlookers, I am glad to see that MF has finally banned one of the many obviously abusive ID supporters, rather than concentrating on loosely inferred insults by ID opponents. I commend him for this action. I will actually praise him when he bans Joe. william spearshake
Onlookers, notice the confident assertions of rhetorical victory by design objectors who are unable to provide evidence, rooted in empirical observation, that blind chance and mechanical necessity can and do in our experience produce FSCO/I with any reasonable likelihood. KF PS: I just saw an assertion by DD that targets a particular objector rather personally, that is well over the line of civility, DD. kairosfocus
A: you don't know what you are talking about. Denton wrote his key work in 1986 or so as an agnostic, and in recent years -- as indicated -- has been shifting views to some sort of pantheism I think, probably under the influence of the pattern of evidence and his inclinations. Berlinski is simply not a Bible-believing Creationist of any stripe; he is an agnostic or atheist. Hoyle is generally reckoned a lifelong atheist, though he may well have trended agnostic in the years leading to his death in 2001. Plato, way back, was a philosophical pioneer. Flew was the leading philosophical atheist in the world for decades, but under the impact of design evidence which he came to accept, became a deist. I should note that atheism is a bit of a slippery term because of the problem of direct or implied knowledge-claims of the non-existence of God. That sort of universal negative claim is very hard for a finite and fallible thinker to back up and leads to some challenging philosophical hot water. That holds also for those who claim to be without belief in God, in a context of implying they know good reason for such disbelief; especially post Plantinga's free will defense that irretrievably shattered the deductive form of the argument from evil and counter-weighted the inductive form. As I noted, it is possible and in fact actual in some cases to be both an atheist and a design thinker. Some form of stoicism seems to be involved. KF kairosfocus
DavidD:
Why don’t we discuss your Gay Porn website Alan and your published books on the homosexual lifestyle from the French Perspective. Let’s talk about how these debates have nothing to do with science for you but rather your hatred of other’s religious worldview which conflicts with your own.
This entire thread has been a train wreck for the IDers, but David's comment takes the cake. keith s
Phoodoo,
Now, since you don't think that Wagner is introducing anything new, and that we have known for a while that random mutations can not account for all the diversity we see in life, what is the definition of the Theory of Evolution?
That's an easy one. Wagner is not claiming that random mutations don't account for the diversity of life. Read the quote that Keith S. reproduced way back at the top of the thread. wd400
k, Denton describes himself as agnostic but is clearly a creationist of some flavor. Berlinski also describes himself as agnostic but is clearly a creationist of some flavor, and even though he strongly criticizes Evolutionary Theory he "does not openly avow intelligent design and describes his relationship with the idea as: "warm but distant. It's the same attitude that I display in public toward my ex-wives."" (quoted from Wikipedia) According to hoylehistory.com: "Hoyle was reportedly an atheist during most of his early life, but became agnostic when he found that he could not feel comfortable trying to explain the finer workings of physics and the Universe as simply “an accident.”" Besides, he's been dead for over 13 years and I seriously doubt that he ever accepted or promoted "Intelligent Design" as it is currently marketed. k, you originally said: “Actually, atheists and the like can be design thinkers, and some are.” And I responded with: "Please name several or more atheists who accept “Intelligent Design”. Can you name even one?" You said atheists and I asked you to name atheists. I'm still waiting. And can you show me that the non-atheists Denton and Berlinski accept and promote your particular version of "Intelligent Design"? Can you show that any ID proponents besides you even understand your version of "Intelligent Design"? P.S. To see you, a person who constantly appeals to authority, complain about appeals to authority is pretty funny, especially since you also portray yourself as a profound authority on many subjects. P.P.S. Who or what do you mean by "and the like" in "atheists and the like"? Are you referring to allegedly evil sinners who don't hold your Christian, Evangelical, Fundamentalist, ID-Creation beliefs? Astroman
Vishnu: I quoted the interview with Wagner... Mung quoted the book announcement... "That’s about as non-darwinian as you can get."... I went by Wagner’s words, and Mung went by the book announcement.
Nice try, but Mung's obvious objective was to try to A) get you to actually read the book and/or B) ask whether you had read the book. Listening to an interview is not reading the book. I've listened to a lot of interviews in my life, and it's not like reading the book. Some people don't interview well, or there is incomplete information or context to develop the ideas that are in the book. That's why people write books and don't merely do interviews. Mung's objective was clear. So, keiths, have you actually read the book? Vishnu
Bystander why did you truncate the Darwin quote? Here is the whole thing
To suppose that the eye with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, absurd in the highest degree. When it was first said that the sun stood still and the world turned round, the common sense of mankind declared the doctrine false; but the old saying of Vox populi, vox Dei, as every philosopher knows, cannot be trusted in science. Reason tells me, that if numerous gradations from a simple and imperfect eye to one complex and perfect can be shown to exist, each grade being useful to its possessor, as is certainly the case; if further, the eye ever varies and the variations be inherited, as is likewise certainly the case; and if such variations should be useful to any animal under changing conditions of life, then the difficulty of believing that a perfect and complex eye could be formed by natural selection, though insuperable by our imagination, should not be considered as subversive of the theory. How a nerve comes to be sensitive to light, hardly concerns us more than how life itself originated; but I may remark that, as some of the lowest organisms in which nerves cannot be detected, are capable of perceiving light, it does not seem impossible that certain sensitive elements in their sarcode should become aggregated and developed into nerves, endowed with this special sensibility
The first sentence is merely used as a framing device. The explanation follows immediately afterwards. Darwin has been proved right because such numerous gradations in eye complexity have indeed been found in nature. Quoting just the first sentence paints a false and misleading picture of Darwin's views. I'm sure that was just an accidental oversight on your part and that you'll do better in the future. Enkidu
OK, back on track. Here's what Darwin says about natural selection: Darwin (1872), chapter 6, page 170. To suppose that the eye with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, absurd in the highest degree. the bystander
DavidD said "Why don’t we discuss your Gay Porn website Alan and your published books on the homosexual lifestyle from the French Perspective." How does making slurs like that further the Intelligent Design cause? Seems like the kind of remark a man with no valid argument would make. Enkidu
DavidD writes:
Why don't we discuss your Gay Porn website Alan and your published books on the homosexual lifestyle from the French Perspective. Let's talk about how these debates have nothing to do with science for you but rather your hatred of other's religious worldview which conflicts with your own. Explain to us how Science is nothing more than a lazy man's post for you to lean on.
I'd certainly be interested in learning more about my gay porn website and books I have written. Where are you getting this information, David, and why aren't I getting the revenues? A variation on the old "atheists hate God" routine. Why is that relevant to the scientific theory of ID? I thought ID was scientific, not religious. Alan Fox
And you can try Michael Denton on the Agnostic trending Pantheist/ Panentheist — he authored one of the seminal works that directly led to the rise of a modern design theory.
One of the things that Denton did was separate evolution into general and special evolution. Special evolution is essentially modern-day genetics, which no one disputes as relevant and valuable for things like medicine or agriculture, but it is not applicable to the real evolution debate. General evolution is where the real debate occurs. It is the macro-evolution of Darwin, or the creation of new species with novel capabilities. What we are getting is the anti-ID people espousing special evolution while failing to address the core issue of general evolution. jerry
10-4. My apologies for getting caught up in their game. I am still too willing to fight. I will get over it. My apologies to Uncommon Descent. I will search for a better way to respond to the crap hurled at us. Joe
Joe, please refrain from namecalling and other trollish tactics. KF kairosfocus
A, passed back again. Try first and foremost Sir Fred Hoyle, an astrophysicist holding a Nobel-equivalent prize, on the fine tuning of the cosmos, monkeying with physics and chemistry, and the passage in his Omni lecture where he used the term Intelligent Design, which may well be the direct root of the term. Then, go have a look at a certain DI fellow, David Berlinski [BTW of Jewish background]. And you can try Michael Denton on the Agnostic trending Pantheist/ Panentheist -- he authored one of the seminal works that directly led to the rise of a modern design theory. The world of thought is a lot more complex than the simplistic agitprop picture painted by NCSE et al. BTW, if you take time to read The Laws Bk X, by Plato, you will see how as a pagan philosopher he first carefully and subtly distinguishes himself from classic paganism [he was not going to go the Court ordered hemlock drink route] then proceeds to make a first design inference argument, ending up with what would be a start point for what would be called the God of the Philosophers. On a cosmological design inference -- and BTW back on this thread's focus, an inference to laws that wrote OOL and drove evo of life by programming it into physics, chemistry etc as shaping forces and materials of nature is a cosmological front loading hypothesis, one that invites a design of nature inference that would have astonished Hoyle. In short, your plain hostility to Bible-believing Evangelical Christians and an obvious overdose of fever swamp indoctrination are leading you far astray. KF PS: Just for reference, here is that worldviews 101 argument again. PPS: I simply have no interest in peer reviewed publications, beyond what happens with a public discussion. There are dozens of such, and the underlying argument by appeal to authority runs into the problem that no peer reviewed panel or other authority is better than the underlying facts, reasoning and assumptions. So, what exactly, on the merits, do you have to object to that undermines the fact that in trillions of cases where we see FSCO/I being made, it is a reliable sign of design -- intelligently directed configuration -- as causal process? Or, to the point that on inference to best current causal explanation, FSCO/I is therefore a reliable sign of design? What is your objection to the point that starting from Darwin's pond or the like, the vera causa grounded explanation for gated, encapsulated, metabolising life using coded genetic info and execution machines such as ribosomes and with a von Neumann self replicator is that the FSCO/I in that is best explained on design? Or, that the further explanation for that in major body plans is the same? Other than, open and brazen or by the back door, locking out of design by a priori evolutionary materialism or some fellow traveller ideology? kairosfocus
Run away, astroboy, run away. It's the evo thing to do when faced with reality... Joe
astro-projectionist:
Your insecurities and desire for authority are obvious.
Nice projection. By some authority only is there an evolutionary theory. By your insecurities blind watchmaker evolution is not the alleged evolutionary theory. Nice one astro... Joe
astroboy:
Joe, what have you won?
I won by the fact that blind watchmaker evolution is the alleged evolutionary theory and it is bogus because it is untestable. Joe
For astroman:
One of the central ideas in the conventional approach to evolutionary change is that DNA alterations are accidental- they arise from unavoidable errors in the replication process or from physio-chemical damage to DNA molecules.- James Shapiro, "Evolution: A View from the 21st Century", page 12 (2011)
Just more support for my claim. Joe
Recombination is indeed basic genetics, but it is also relevant to evolutionary biology. The tension between recombination and selection, for instance, is the most important question in understanding speciation. Mung’s mistake above, where he seems to treat each individual as a separate entity unlinked to others, is another example of the concept's relevance.
It is indeed part of genetics and part of evolution no matter how one defines the term evolution but it only leads to trivial changes in life forms over eons and thus is not a factor in the overall debate. To imply so is a diversion from the real debate. If one disagrees then they are obliged to provide evidence that it has led to anything meaningful. Will Provine said he had faith that these small changes led to larger meaningful changes. He had no evidence and all he had was his faith and hope that it would some day be proven. It is an example of an atheist having both faith and hope but essentially no charity. jerry
Joe said: "And I won." Joe, what have you won? Your distorted view of winning is all that matters to you, isn't it? Your insecurities and desire for authority are obvious. Astroman
astroman:
He keeps playing worn out games with that term.
That is your uneducated opinion. Joe
astroman:
Your persistence in being evasive and moving goal posts is noted. I didn’t ask about “evolutionism” or what “evos” do or don’t do.
That is what the search refers to- evolutionism, duh.
Are you saying that kairosfocus and all other ID proponents who go on and on about searches are wrong?
You are dense. They do so in the context of evolutionism and yes, they are wrong, as evolutionism is not a search. By calling it a search they are giving evolutionism an undeserved boost. OTOH intelligent design evolution would be a guided search. Joe
astro:
I never said that “blind watchmaker” is my position,
So you don't accept unguided evolution. The alleged theory of evolution is blind watchmaker evolution.
and you’re the one who erroneously keeps tossing the term “blind watchmaker” into the debate as though it is synonymous with Evolutionary Theory.
It is synonymous with the alleged evolutionary theory. Joe
DavidD, I never said that Joe invented the term "blind watchmaker". He keeps playing worn out games with that term. Who is the "religious clergyman" that you're referring to and what makes you think that he's mine? Astroman
Joe asked: " Are you now admitting your position isn’t relevant?" I never said that "blind watchmaker" is my position, and you're the one who erroneously keeps tossing the term "blind watchmaker" into the debate as though it is synonymous with Evolutionary Theory. Astroman
Joe said: "They are wrt evolutionism. Even Dembski now acknowledges that. However they are giving it the benefit of the doubt by calling it a search and at least attempting to model it. That is far more than evos do." Your persistence in being evasive and moving goal posts is noted. I didn't ask about "evolutionism" or what "evos" do or don't do. I'll try again: Joe, please explain what you mean. Are you saying that kairosfocus and all other ID proponents who go on and on about searches are wrong? Are they wrong in claiming that searches are involved in evolution? Are they wrong in the way they describe searches in evolution? Are you saying something else? If so, please be specific, and what exactly is it that Dembski now acknowledges? Astroman
Thank you DavidD- they really think I invented it because they call it "Joe's strawman". Amazing... Joe
Astroman "Joe, your blind watchmaker games were played out long ago. Try something that’s new and relevant." Joe's not the one who invented and insists upon "Blindwatch Maker" evolution, one of your religious clergyman did that. He keeps asking for proof, but all he gets is backwash. DavidD
Why do some evos think that blind watchmaker evolution is not relevant? That is their position and they are admitting that?! Really?! Sweet Joe
keith s: I'm not confused, cantor. I'm calling your bluff.
Three times I've offered to tutor you if you would explain what is confusing you. Three times you've dodged and weaved. I'm calling your bluff. cantor
astroman:
Joe, your blind watchmaker games were played out long ago.
And I won. Unguided evolution is blind watchmaker evolution. Natural selection is still blind and mindless.
Try something that’s new and relevant.
Blind watchmaker evolution is the opposing position. Are you now admitting your position isn't relevant?
I see that you want all of your opponents to be expelled.
I see that you are still a twisted jerk. Joe
Bystander "Of course not. Just that most scientific ID arguments is based on a lot of things written by him, including famous/infamous(depending on which camp you are in) Complex Specified Information." But again, what does this have to do with what I originally wrote to Keith and his failure to respond to my question as opposed to the usual deflection by Evos ? DavidD
wd400:
Recombination is indeed basic genetics,...
The question is- "Is recombination a blind watchmaker or intelligent design mechanism?" Joe
Joe, your blind watchmaker games were played out long ago. Try something that's new and relevant. I see that you want all of your opponents to be expelled. Astroman
keith s:
As you can see, the ‘islands of function’ argument is dead in the water, so to speak.
Unguided evolution was stillborn- it never had a chance (pun intended). :razz: Joe
David @150
“Dembski is a proponent of ID.” So what? What does that have to do with my believing everything that he writes down with keyboard or pen? Does everyone have to pass a requirement for belief to be able to post?
Of course not. Just that most scientific ID arguments are based on a lot of things written by him, including the famous/infamous (depending on which camp you are in) Complex Specified Information. the bystander
astroman:
Joe, please explain what you mean. Are you saying that kairosfocus and all other ID proponents who go on and on about searches are wrong?
They are wrt evolutionism. Even Dembski now acknowledges that. However they are giving it the benefit of the doubt by calling it a search and at least attempting to model it. That is far more than evos do. Joe
keith s:
Could you explain why Thorton gets banned for referring to “moronic crackpots”, while Joe remains despite saying things that are far more incendiary, for months on end?
thorton incites and I merely respond (you can use any evo in place of thorton). Get rid of the antagonists and I am OK. Joe
Astroman- Has anyone submitted their blind watchmaker claims to any peer reviewed scientific publications? If not, why not? If your claims have scientific merit I would think that you would have confidence in them and would try hard to get them published. Joe
Bystander "Dembski is proponent of ID." So what ? What does that have to do with my believing everything that he writes down with keyboard or pen ? Does everyone have to pass a requirement for belief to be able to post ? Alan Fox "Oh dear! Bill Dembski unknown by a commenter on (formerly) his own personal website." Why don't we discuss your Gay Porn website Alan and your published books on the homosexual lifestyle from the French Perspective. Let's talk about how these debates have nothing to do with science for you but rather your hatred of other's religious worldview which conflicts with your own. Explain to us how Science is a nothing more than a lazy man's post for you to lean on. DavidD
k, have you submitted your ID claims to any peer reviewed scientific publications? If not, why not? If your claims have scientific merit I would think that you would have confidence in them and would try hard to get them published. Astroman
KS, re bomb challenge cf here. KF kairosfocus
k, in addition to your usual huffing and puffing, you said: "Actually, atheists and the like can be design thinkers, and some are." Please name several or more atheists who accept "Intelligent Design". Can you name even one? k said: "The design inference is not a worldview project..." Then why do you and other ID proponents promote ID as a worldview, and especially as a Christian ID-Creation worldview? Are you denying that you promote your Christian, Evangelical, Fundamentalist, inseparably joined ID-Creation worldview and everything that you include in it in your lengthy sermonical speeches here, on your blogs, and on other blogs? Astroman
Astroman said:
Do you believe that the ID mechanism is the existence, knowledge, and creative power of the Biblical God, which is also called the Abrahamic God?
No. I'm not a Christian nor of any Abrahamic faith or any organized religion whatsoever.
Also, if what you say were true, ID proponents wouldn’t be bringing religious references and arguments, and lots of them, to the table.
That people also argue about the religious/philosophical implications of a theory doesn't render the theory non-scientific. Even Darwinists bring religious and philosophical arguments to the table, such as the pernicious "god wouldn't do it that way" argument, which is a religious argument.
Based on your claims, will you please explain the Wedge document and agenda?
It's a political document that was drafted as a plan to use ID for social/political purposes. The fact that people use ID or Darwinism for the purposes of their social agenda (remember Darwinism-based eugenics?) doesn't mean ID isn't a legitimate scientific endeavor. William J Murray
William, if what you say is true, no ID proponent should be afraid to answer the following question with a yes or no: Do you believe that the ID mechanism is the existence, knowledge, and creative power of the Biblical God, which is also called the Abrahamic God? Also, if what you say were true, ID proponents wouldn't be bringing religious references and arguments, and lots of them, to the table. Based on your claims, will you please explain the Wedge document and agenda? Astroman
A: You have already tried the dishonesty projection gambit elsewhere; and have yet to do the decent thing when it was pointed out. I also took time to lay out for you a set of links to relevant discussions on such matters as the grounding of worldviews and why a reasonably informed person might have a theistic worldview. These matters are irrelevant to the inductive logic grounding of the design inference on empirically observable, reliable sign. KF PS: you are asking for a difference between theistic evolutionism and design thought. Actually, atheists and the like can be design thinkers, and some are. Maybe they are stoics or the like and see rationality as inherent in the world order. Pantheists and panentheists can be design thinkers, probably with some overlap to the stoics I just mentioned; of these, E of the Nile, there be billions. Theistic evolutionism can be linked to design thought, but in many cases today, there is a feeling that design is not evident from the observable phenomena of the world of life; it is apprehended at a higher level. The design inference is not a worldview project, it is a simple inductive logical inference, that there are features we may see in the world that on inductive investigation, point to design. For instance, functionally specific complex organisation and associated information, cf here for a discussion. And I really gotta run. kairosfocus
ID is a scientific theory that intelligence can leave identifiable, quantifiable evidence, such as CSI, which is not known to be produced by any other causal agency (other than intelligence), which leaves intelligent agency as the best explanation for the evidence in question. ID doesn't claim that the source of the evidence is God. Theistic evolution is the philosophical/religious view that religious teachings about god are compatible with biological evolution as science currently describes it. William J Murray
PPPS: I guess I have to at least note that I specifically spoke to moving islands, using the further simple metaphor of sandy barrier islands that change topography, shape and location; I need not go into 10^59+ dimensional spaces in a blog that in the end has to be reasonably communicative. Just note that an extended vector of length n can model an n dimensional space with n degrees of freedom, and 10^59+ dimensions are not materially different from 3 or from the 100+ used to model a modern economy for input-output analysis, etc, and a memory space with 10^9 cells of 8 bits is a physical instantiation of a 10^9 dimensional space with 256 values for each element, equivalent to an 8 bn dimension space with two values for each element. Indeed, I spoke to moving islands of function in those barrier island terms years ago -- I come from a city with a tombolo across the mouth of its harbour that has played a key role in its history, starting with the notorious 1692 quake that made the tombolo drastically and effectively instantly shift, followed by 300 years of rebuilding and another quake in 1907 that did interesting things. And I now live in an island drastically reshaped in a few years by a volcano, i.e. static terra firma is not part of my underlying understanding of the world. Such makes no fundamental difference to the search challenge; the objection is a confusing distractor rather than a reasonable one. And BTW, climate is a fiction, a moving average of weather across 33 years, so trend is inherently embedded in that construct. kairosfocus
Since theistic evolution was mentioned, will you ID proponents please describe the differences between ID and theistic evolution? Astroman
Astroman, ID doesn't impose any a priori "knowledge" that god exists. ID doesn't impose any ideological preference on how the evolutionary evidence is sorted. What ID does is use abductive reasoning to reach the best inference given the evidence at hand, without insisting on ideological limitations for the conclusion. Darwinism holds a priori that unintelligent causal forces must be sufficient (refer to Lewontin) and thus all theories must include only "natural" (materialist) causes. ID brings no such bias to the table. It doesn't claim that intelligence must be part of the causal explanation; indeed, ID holds that unless the signature evidence of intelligence is above a fail-safe threshold, material explanations are the better causal explanation, even if that cause is currently unknown. ID does not insist that the source of said intelligence, if evidence is found, is "God". The a priori bias distinction is clear; Darwinists insist that all explanations in science maintain materialism; ID does not impose any such biased restriction. William J Murray
k said: "What is the independent basis for knowing that such exists apart from imposed a priori materialism that then presents the tree of life that embeds this as proof of it?" k, what is the independent basis for knowing that your supernatural God exists and is the designer and creator of the universe apart from imposed a priori religious beliefs that then present Christian ID-Creationist claims that embed this as proof of it? Or this version: k, what is the independent basis for knowing that Godly ID-Creation exists apart from imposed a priori religious beliefs that then present the ID inference that embeds this as proof of it? Or this version: k, what is the independent basis for knowing that Godly ID-Creation exists apart from imposed a priori religious beliefs that then present religious beliefs about Godly ID-Creation that embed this as proof of it? Or this version: k, what is the independent basis for knowing that Christian Godly ID-Creation exists apart from imposed a priori religious belief that then presents refuted claims about FSCO/I, irreducible complexity, probabilities, needle in haystack searches, extremely rough landscapes, islands of function, etc., that embed this as proof of it? I could come up with more versions but no matter how many I come up with I'm sure that you will miss the point. P.S. Why do you abbreviate some people's usernames? Astroman
keith said:
Logic is a function of our fallible human minds. If our minds are fallible, then our logic might be incorrect. If we can’t be absolutely certain that our logic is correct, then we can’t be absolutely certain of the conclusion of the argument. We can be quite certain that error exists, but we can’t be absolutely certain of it.
Apparently, keith thinks it is possible that 1+1=5 AND 1+1=2 AND 1+1=.234 AND 1+1=0, since he cannot be absolutely certain that error exists. One simply cannot argue with that kind of delusional thinking. William J Murray
PPS: Gotta go, I trust others will be able to deal with anything more above. A slice of the cake has all the ingredients in it. kairosfocus
KS, the pejorative strawmannising matter continues. Did you notice that I referred above to Hamming distance as extended, and to how the string of chained variables that WLOG describes a relevant config space immediately addresses multi-dimensionality? It seems that you need to address what a phase space is, and what degrees of freedom are, which is precisely huge multidimensionality, for small thermodynamic samples of the order of 10^22 dimensions, and up; e.g. a simple ideal gas has three degrees of freedom per molecule, put a good fraction of a mole together and see where that gets ya. Recall, that is where I started from, statistical thermodynamics and the like. Using the toy example of 10^57 atoms as observers, each flipping and observing a tray of 500 coins, to illustrate the search problem: that's 500 dimensions per tray of coins, with 10^57 trays, explored at 10^57 flips, 10^14 times per second, for 10^17 s. Either you hopelessly failed to interpret reasonably or you are playing at red herrings led away to strawmen. The further point is that regardless of how such islands may change shape and migrate, the scope of the config space becomes so much greater than search resources on sol system scope or observed cosmos scope that the isolation of such islands makes blind search strategies maximally implausible to be successful. Where, since complex 3-d entities can be reduced to strings of Y/N questions structured per a protocol (what AutoCAD etc. do to represent objects as files), analysis on digital binary strings is WLOG. Where, further, searches for golden searches face the point that a sample/search of a set of possibilities of cardinality W comes from the set of subsets, of cardinality 2^W. Searches for golden searches, S4GS, will be far more implausible than the already maximally implausible direct search for islands of function. The problem is not to move around within islands of function, but to find them in config spaces that dwarf the toy space I used, which already shows that a search using up sol system resources [and spotting you the apparatus for 500 coin flips, scans etc] would be looking at a straw-sized sample relative to a cubical haystack as thick as our galaxy at its central bulge. KF PS: Your logical error is revealing: first, you seem absolutely sure of being prone to error. Next, you ducked the point that your conscious self-awareness is a first certainly known truth, and that in that context rational contemplation confronts you with other certain truths. Error exists, E; we may attempt the denial, ~E; but asserting ~E implies that to claim E is to err, so an error would exist after all, which affirms E. The attempted denial refutes itself, leading to the immediate point, E. We may and do err, but that does not mean that we have no points of certain knowledge of truth. Consciousness is one; the existence of error is another; the classic first principles of reason, pivoting on the recognition of any distinct thing A leading to a world partition W = { A | ~A }, are another. The existence of numbers is necessarily true but such is not self evident as the process to get there is complex. And so forth. kairosfocus
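[For readers who want to check the coin-tray arithmetic above, the short script below simply restates the quoted figures (10^57 trays of 500 coins, 10^14 observations per second, 10^17 seconds) on a log scale, and includes the Hamming-distance metric the comment appeals to. The numbers come from the comment itself; nothing further is assumed.]

```python
from math import log10

# Log-scale restatement of the 500-coin toy example quoted above.
config_space = 2 ** 500                      # ~3.27e150 configurations per tray
samples = 10**57 * 10**14 * 10**17           # 10^88 total observations

print(f"log10(configurations) = {log10(config_space):.1f}")                    # ~150.5
print(f"log10(observations)   = {log10(samples):.1f}")                          # 88.0
print(f"log10(fraction seen)  = {log10(samples) - log10(config_space):.1f}")    # ~-62.5

def hamming(a, b):
    # Hamming distance between two equal-length bit strings,
    # the distance measure invoked in the comment above.
    return sum(x != y for x, y in zip(a, b))

print(hamming("10110", "11100"))             # 2
```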
KS, do the words policy + advisor mean anything to you? Why did you immediately jump to the assumption that I am merely interfering? That bit of conclusion jumping you just indulged may be one of the most revealing instances of attitude revealed by projection ever seen at UD. KF kairosfocus
Oh dear! Bill Dembski unknown by a commenter on (formerly) his own personal website. Alan Fox
@David: Dembski is a proponent of ID. He is as important as Behe. the bystander
keith s "Do you know who Dembski is? Do you think he’s a “Darwinist”?" Keith, I couldn't care less who Demski is or who or what he purports to be or what he believes. I find there are those who participate in many of these time wasting origins debates will often try and pacify evolutionists to by using such terms like micro-evolution etc to show good faith in attempting common ground with evolutionists. Unfortunately common ground to an evolutionist is that one must accept & believe everything that spews forth from their mouth or keyboard. So using such evolutionary terms is nothing more than shoe-horning in the actual word "evolution" itself into any origins debate discussion for which the typical evolutionist will never be satisfied with anyway. Perhaps you should ask someone who actually beliefs Theistic Evolution ? DavidD
DavidD, Do you know who Dembski is? Do you think he's a "Darwinist"? keith s
keith s "It isn’t a conscious search. It’s a search in the mathematical sense, in which points are selected (via mutation) from a search space and their fitness is evaluated." I love reading religious "Faith Affirmation" quotes DavidD
DavidD, It isn't a conscious search. It's a search in the mathematical sense, in which points are selected (via mutation) from a search space and their fitness is evaluated. Dembski understands this, even if you don't. His recent talk at the University of Chicago was titled "Conservation of Information in Evolutionary Search". keith s
As you can see, the 'islands of function' argument is dead in the water, so to speak. keith s
Thorton "Evolution doesn’t have to search the ginormous whole universe search space looking for innovations. All it does in each new generation is search the space immediately surrounding the existing working copy. If it finds a small improvement, it keeps it." Evolution is incapable of searching for anything, it's blind, undirected, unguided, unintelligent, etc, etc, etc reeeeemember ? Why is it that these religious Darwinist types keep sneaking in the concept of intelligence when their central religious dogma says it's unnecessary and actually forbids it's mention? Why the mention of intelligent creative concepts and then denying there is any such thing ? Why the lying for Darwin ? It's because evolution isn't about science. It never from the start has been, It's about promotion of yet another religious faith in a long line of faiths this planet has ever seen. The mismanagement of Earth's present and remaining natural resources prove they have no handle on the Science. They are truly Anti-Science. DavidD
Also, you're relying on Axe's enzyme experiment, which is deeply flawed. Here's how I explained it to vjtorley:
As for Axe, there is a huge problem with his experiment. He takes two related but highly dissimilar enzymes and tries to determine how many nucleotides would have to change to get from the first enzyme’s function to the second’s. This is bogus, because no one claims that the second enzyme evolved from the first, or vice-versa. They are related, but that doesn’t mean that one evolved from the other. All it means is that they have a common ancestor. For Axe’s experiment to be successful, he would have needed to demonstrate that the two enzymes couldn’t have evolved from a common ancestor.
And:
today I thought of a good analogy for people who don’t understand the technical details: It’s as if Axe is arguing that you can’t drive from Milwaukee to Detroit, because if you draw a straight line between the two and follow it, you’ll run straight into Lake Michigan. (Axe’s argument is actually worse than that, but why flog a dead horse?)
keith s
You also dodged my response to Allan Miller:
Allan Miller:
I’d add that the landscape is never static. The contours are perennially shifting as the environment changes (the environment, of course, being everything that impacts on gene persistence, not just the organism’s physical surroundings).
Let me expand on that for the benefit of our UD interlocutors. The fitness landscape faced by an organism can change a) as the physical surroundings change (e.g. climate change); b) as organisms move about their geographic ranges (e.g. a mountainside species experiencing different selective pressures at higher vs. lower elevations); c) as the species evolves (e.g. a subpopulation moving into a particular niche, pressuring others in the same species to exploit different niches); d) as other species evolve (e.g. predator-prey arms races); e) as the genome itself evolves (e.g. changes to one part of the genome altering the prospective fitness of changes to another part); f) as population sizes cycle (e.g. predator boom-bust cycles leading to the marvelous prime-number adaptation in cicada life cycles). So in addition to the other questionable or invalid assumptions made by the ‘islands of function’ folks, we need to add one more: the assumption that fitness landscapes are either static or that they change too little to allow populations to get unstranded. It’s a silly assumption, given all the ways in which fitness landscapes can change over time.
keith s
KF asks:
Notice the dodge?
I did. You dodged my point about the many dimensions of the fitness landscape:
Let me start the ball rolling by noting that real-life fitness landscapes have many dimensions, not just the three implied by the ‘islands of function’ metaphor. To see why this is important, let’s start with a two-dimensional landscape and build from there. In a two-dimensional landscape, height still represents fitness, but horizontal motion is limited to one dimension — a line, rather than a plane. Motion is limited to two directions, right and left. In such a landscape, a peak is any point that has lower points on both sides. There may be higher peaks (or equivalently, ‘islands’) further along the line, but you would have to move through the adjacent lower points to get to them. It’s easy to see how evolution could get stuck on a local peak/island. Now consider a three-dimensional landscape. Height represents fitness, as always, but horizontal motion can now range over two dimensions instead of one. You no longer have to go through the low points. You can potentially go around them, and you have many, many choices of paths, not just two. A peak is no longer defined as having lower points to the right and the left. It has to have lower points in all directions. Thus peaks are more exceptional in three dimensional space than they are in two. The trend continues. Each time you add a dimension, you exponentially increase the number of potential paths through the landscape. It becomes harder and harder to find a true peak, because a peak has to be surrounded by lower points not only in each dimension, but in every possible combination of each of the dimensions. By limiting their thinking to three dimensions, IDers drastically overestimate the likelihood of getting stuck on a local peak. Their intuition fails them. Real fitness landscapes have hundreds or thousands of dimensions, and the likelihood of getting stuck on a peak diminishes exponentially as the number of dimensions increases.
keith s
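[To make the dimensionality point above concrete, here is a toy simulation, an illustration only and not a model of any real fitness landscape: fitness values are independent random numbers over bit-string genotypes, so a genotype is a local peak only if it out-scores all n single-bit-flip neighbours, which on such a landscape happens at a rate of roughly 1/(n+1). The dimensions, trial counts, the peak_fraction helper and the random-landscape assumption are all choices made for the example.]

```python
import random

# Toy estimate of how often a randomly chosen genotype is a local
# fitness peak as the number of dimensions (mutational directions)
# grows. Fitness values are i.i.d. random numbers, a deliberately
# crude "rugged" landscape.

rng = random.Random(0)

def peak_fraction(n_dims, trials=2000):
    fitness_cache = {}
    def fitness(genotype):
        # assign each genotype a fixed random fitness on first sight
        if genotype not in fitness_cache:
            fitness_cache[genotype] = rng.random()
        return fitness_cache[genotype]

    peaks = 0
    for _ in range(trials):
        g = tuple(rng.randint(0, 1) for _ in range(n_dims))
        f = fitness(g)
        # all genotypes one bit-flip away
        neighbours = (g[:i] + (1 - g[i],) + g[i+1:] for i in range(n_dims))
        if all(f > fitness(nb) for nb in neighbours):
            peaks += 1
    return peaks / trials

for n in (2, 5, 10, 50, 200):
    print(f"{n:4d} dimensions: ~{peak_fraction(n):.3f} of sampled points are local peaks")
```

[On this deliberately crude landscape the printed fractions fall from about a third at 2 dimensions to well under one percent at 200; whether real fitness landscapes behave like this is precisely what the thread is arguing about.]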
KF:
KS, all you have shown is that you are refusing to accept that the example of trying to deny that error exists automatically, necessarily creates an error, thus error MUST exist.
KF, The above is a logical argument in support of the claim that "error MUST exist". Logic is a function of our fallible human minds. If our minds are fallible, then our logic might be incorrect. If we can't be absolutely certain that our logic is correct, then we can't be absolutely certain of the conclusion of the argument. We can be quite certain that error exists, but we can't be absolutely certain of it. You claim to value logic. Logic demonstrates that absolute certainty is unattainable (and yes, of course we cannot be absolutely certain of that). You have persisted in this error for years, in the teeth of repeated correction. Please do better. keith s
Excerpt from the book: Darwin’s theory was a bit like that first movie of a galloping horse, revolutionary when compared to still photography, but only a modest step on the path to full-length feature films... The single most important question about evolution, the question that Darwin and generations of scientists after him did not, could not touch: How does nature bring forth the new, the better, the superior? You might be puzzled. Wasn’t that exactly Darwin’s great achievement, to understand that life evolved and to explain how? The biggest mystery about evolution eluded his theory. And he couldn’t even get close to solving it. the bystander
KF,
KS, As you know as I have stated in so many words, I have been exceptionally busy on policy matters for months [maybe it has not registered that there has been a change of govt here], with a current peak . . . as in there’s a new govt that has big challenges on its plate.
Then why not let the new government do its job? Why must you always try to insert yourself into its affairs? If they actually want your help, I'm sure they'll let you know. Let the government do its job, and then you'll have plenty of time to tackle The Bomb. keith s
Keith S So we can't be certain about our certainty of being uncertain? Andre
KS, as you know and as I have stated in so many words, I have been exceptionally busy on policy matters for months [maybe it has not registered that there has been a change of govt here], with a current peak . . . as in there's a new govt that has big challenges on its plate. Solow production functions, signatures of Kondratiev wave troughs in GDP data for the OECS region, sustainable structural transformation of economies through programme and project cycle management based strategic programmes, and the like. With a dose of capacity building and education system transformation as a side order. Not to mention geothermal energy development and related renewables and alternatives, plus ports and town developments in the teeth of volcano ravages and long term consequences of policy errors. I have been brought to the point where I am now speaking about the project of tickling a dragon's tail, with particular reference to the challenges of investment to trigger and sustain growth with reasonable stability. It is only because I was noticing a problem here that I spoke for the record, and as you replied I engaged you step by step for a little while instead of doing what I should have done: go back to sleep. As far as I was able to see at a glance, your proposed bomb has failed and fizzled spectacularly, not like what would have happened if a brave scientist had not sacrificed his life to stop a chain reaction, taking the infamous blue flash and dying horribly of radiation nine days later. I am off to bed for some quick winks, and when I find reasonable time, I will look at what is reasonable to address. KF kairosfocus
KS, all you have shown is that you are refusing to accept that the example of trying to deny that error exists automatically, necessarily creates an error, thus error MUST exist. To be uncertain in the teeth of that sort of case, shows only that there is intransigent resistance to reason and evidence at work here. KF kairosfocus
KS: The post you link starts with a strawman:
Because the ID argument is a negative one
First, corrective arguments have their place, and second, the design inference is on what we know about the source of FSCO/I and is not a negative argument. Next, here is a key clip, strawman no 2:
What mysterious barrier do IDers think prevents microevolutionary change from accumulating until it becomes macroevolution? It’s the deep blue sea, metaphorically speaking. IDers contend that life occupies ‘islands of function’ separated by seas too broad to be bridged by evolution.
You don't get TO evolution until you first get to life, as I pointed out in the long-since-posted Darwinism challenge that has never been properly answered. So you are both making a strawman caricature here and already begging the foundational question, the origin of life in Darwin's pond or the like. Next:
For those who are familiar with fitness landscapes, a brief review. Imagine a three-dimensional landscape, similar to a terrestrial landscape. There are mountains and depressions, ridges and valleys, plains and plateaus. An organism occupies a particular spot on the landscape. Nearby spots represent organisms that are similar, but with slight changes. As you move further away from the spot, in any direction, the organisms represented become less and less like the original organism. Evolution can be visualized as a journey across such a landscape. Individual organisms don’t move, but their offspring may occupy different nearby spots on the landscape. So too for their offspring’s offspring, and so on. Thus successive generations trace out a path (or paths) on the fitness landscape as changes accumulate.
Notice the smuggled-in "continent of incremental development" assumption? What is the independent basis for knowing that such exists apart from imposed a priori materialism that then presents the tree of life that embeds this as proof of it? Going on:
The idea, according to ID proponents, is that populations remain stranded on these islands of function. Some amount of microevolutionary change is possible, but only if it leaves you high and dry on the same island. Macroevolution is not possible, because that would require leaping from island to island, and evolution is incapable of such grand leaps. You’ll end up in the water. There is some truth to the ‘islands of function’ metaphor, but it also has some glaring shortcomings that ID proponents almost always overlook. I will mention some of the strengths and shortcomings in the comments, and I know that my fellow commenters will point out others.
Notice the dodge? We can fairly easily show that novel body plans will require genomes of incremental scope 10 - 100+ million bases, to account for new cell types, tissues, organs and associated regulatory networks etc, with the scope of exemplary multicellular animals etc pointing to the upper end as more realistic. Even with a continent of beings, you would have to account for that much increment in organised functionally specific info, on realistic pops and timescales available. Mission impossible, as is well known; 200 MY to fix a couple of muts on a line from chimp-like to human-like comes to mind, and the pop vs changes for whales, hundreds to tens of thousands, is further notorious. So even if there were a continent of being, that would not answer the problem. But more to the point, we are looking at the nature of FSCO/I, a LOT of complexly organised, specifically coupled parts to achieve function. Thus, acceptable configs are very tightly constrained, and will come in islands of function, not continents. There is for example no smooth incremental path from See Spot Run or Hello World to a complex operating system or to War and Peace, or the like. Notoriously, major protein fold and function domains are deeply isolated and structurally unrelated in AA sequence space. The Hamming distance to be traversed to find the first island and onward to find others swamps the accessible atomic resources in our solar system or even the observed cosmos as a whole. Down in the comments you go on:
1. Axe claims that “Darwin’s engine moves in steps that can only reach points a tiny distance away from the prior point.” I suppose this depends on what he means by “tiny”, but the distances bridged by mechanisms such as frameshift mutations or recombination are arguably not tiny at all. 2. Axe claims that “further progress would require a still higher point to fall within reach once that move is made.” This is false. Evolution can also make moves that are neutral or even slightly deleterious. 3. Axe also fails to recognize a fact that we’ve been highlighting throughout this thread: The number of directions in which motion is possible increases exponentially with the number of dimensions in the fitness landscape. With so many directions to choose from, it is not surprising at all that evolution, operating across entire populations and long timespans, is able to find directions that lead to higher fitness.
Frame shifts are not going to get us to 10 - 100+ mn bases of further functional organisation involving novel proteins and families of fold-function. And the multidimensional scope is implicit in the WLOG use of strings to map co-ordinates in the relevant spaces. Hamming distance and extensions of it are inherently multidimensional and underscore the point that the depth of isolation hits home hard. It also highlights that large-scope incremental changes within zones of function are also going to face huge challenges on pop size, generation spans and mutation rates, which is the context of Axe's remarks. AM here adds little to the point, but multiplies the clouds of misunderstanding that are likely indeed to obfuscate what should be plain:
He describes a kind of landscape that can exist – a static, rugged landscape, where populations can climb a smooth slope but cannot cross the many ‘downhill’ stretches due to the steepness of the slope leading into them. On empirical grounds, it bears little resemblance to the landscape that real DNA-organisms traverse. This landscape is ever-shifting. The environment – which includes every other gene in the genome, and every other evolving and migrating species in the ecosphere – changes continually. And ‘real’ fitness differentials tend to be rather small, rendering random drift a powerful force. Drift is much more ‘exploratory’ than NS, precisely because it is not anchored to peaks/islands. And most importantly, he completely forgets about recombination. This, if you could be bothered to write a GA and investigate the various parameters, provides a massive rate-change to ‘Darwin’s engine’, and brings points notionally ‘distant’ on a point-mutation scenario into very close contact.
Landscapes can play at being sand islands all they want, ever shifting and of variable geometry; that makes zero difference to isolation and the need to get to shores of function amidst seas of non-function. As for recombinations and sexual reproduction etc leaping seas from island to island, that is a flight of fantasy. We are talking 10 - 100+ million bases worth of innovation. A space of possibilities for 4^10 mn is something like 8.19*10^6,020,599 possibilities, which so utterly swamps the possible search resources of the observable cosmos (10^150 steps on an utterly generous scope of one search per atom, for 10^80 atoms, every 10^-45 s) that this is not reasonable. We could go on and on, but the point is already adequately made. The search challenge is real, but has been strawmannised and dodged, not fairly faced. And the hoped-for magic bullet links at TSZ fail. They only show how willful error mutually reinforces in the teeth of opportunity to correct. KF kairosfocus
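To put the arithmetic in that comment in checkable form, here is a minimal Python sketch. The figures are the deliberately generous round numbers quoted above (10^80 atoms, roughly Planck-rate state changes at 10^45 per second, 10^25 seconds, the round figures commonly used to derive a 10^150 upper bound); they are assumptions for an upper limit, not measurements.

import math

bases = 10_000_000                      # 10 million nucleotide positions
log10_space = bases * math.log10(4)     # log10 of 4^10,000,000

# Leading-digit form of the space size, for comparison with the figure quoted above
mantissa = 10 ** (log10_space - math.floor(log10_space))
print(f"size of sequence space ~ {mantissa:.2f} x 10^{math.floor(log10_space):,}")

# Generous upper bound on blind-search steps: 10^80 atoms x 10^45 /s x 10^25 s
log10_searches = 80 + 45 + 25
print(f"log10(upper bound on search steps) = {log10_searches}")
print(f"log10(fraction of space searchable) ~ {log10_searches - log10_space:,.0f}")

Run as-is, this reproduces the ~8.19 x 10^6,020,599 figure and puts the searchable fraction at roughly 10^-6,020,450.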
KF, We've been through this before. I think "error exists" is true, but I can't be absolutely certain of it, and neither can you. This is easy to see. The conclusion rests on a logical argument. Logic is a function of our fallible human minds. If we can't be absolutely certain that our logic is correct, then we can't be absolutely certain of the conclusion. P.S. I notice that you still haven't attempted to defuse my bomb. Why are you leaving the dangerous work to the other pro-ID commenters? keith s
Excerpt from the book: LOL Here is an amazing experiment you can try at home. Put wheat in a container and seal the opening with dirty underwear. Wait twenty-one days, and mice will emerge. Not just newborn mice, but grown adult mice. At least that’s what the seventeenth-century physician and chemist Jan Baptista van Helmont reported. (He also revealed that scorpions would emerge from basil placed between two bricks and warmed by sunlight.) Van Helmont wasn’t the first to postulate the doctrine of spontaneous generation, which dates back at least to Aristotle, though he was among the last. the bystander
KS, you have a choice: when you make an assertion that implies the world IS like X, that is an assertion of truth, a claim to know; and implying or assuming that objective or even absolute knowledge -- warranted true belief -- is impossible or unattainable is itself an absolute knowledge claim, so it runs into self-referential incoherence. Where, as a counter example, I put it to you that Error exists is undeniably, certainly and absolutely true, as just one instance. Like unto it, it is certainly and undeniably known to you that you are a conscious entity; leading to a world of implications that overturn all sorts of popular modernist and/or post-/ultra-modernist notions and worldview claims. And I think this is an example of the basic logic troubles for many objectors to the design inference that have gone all the way back to exchanges over first principles of right reason. kairosfocus
KF, Your "islands of function" argument won't float: Things That IDers Don’t Understand, Part 2a – Evolution is not stranded on ‘islands of function’ keith s
PS: If you need a specific biological case in point of deeply isolated islands of function, try major clusters of proteins in AA space, where, as GP has pointed out patiently here and at TSZ over and over for a very long time, the deep isolation of functional folds is notorious, with thousands of islands, about half of which have had to be in place from essentially OOL. PPS: Notice, the key concession a reviewer is making, as noted in the OP:
The question “how does nature innovate?” often elicits a succinct but unsatisfying response – random mutations. Andreas Wagner first illustrates why random mutations alone cannot be the cause of innovations – the search space for innovations, be it at the level of genes, protein, or metabolic reactions is too large that makes the probability of stumbling upon all the innovations needed to make a little fly (let alone humans) too low to have occurred within the time span the universe has been around.
That is the problem Wagner is acknowledging. What he is not seeing is that if there are laws of nature at work that inject huge quantities of info like that, then that points to fine tuning of the cosmos on steroids, implying design from the foundation of the cosmos, above and beyond the whole gamut of existing evidence of that. PPPS: Notice, K-S' inadvertent admission in comment no 2:
All it does in each new generation is search the space immediately surrounding the existing working copy. If it finds a small improvement, it keeps it.
So, either this is all about minor increments within islands of function (such as Finch beak oscillatory variations with drought/rainy weather cycles) or else there is a huge unwarranted assumption of a vast continent of functions from OOL up to us, traversable by an incremental tree of life. Exactly what there is no good reason to accept, starting from major protein fold and function domains. kairosfocus
logically_speaking,
Are you absolutely certain about that??
Of course not. Slow down and think. keith s
K-S, 49: While I have reason to doubt that you and your ilk will attend to a corrective point on the strawman tactic involved in the clip following, I will note for the record on:
Evolution doesn’t have to search the ginormous whole universe search space looking for innovations. All it does in each new generation is search the space immediately surrounding the existing working copy. If it finds a small improvement, it keeps it.
1: First, this pivots on refusing to address the root of the Darwinist tree of life, origin of life in Darwin's pond or the like, where self replication is not available to you to start with and needs to be empirically justified. You cannot get to assuming a gated, encapsulated, metabolic, code using, ribosome using, von Neumann self replicating life form; you have to show how to get there.
2: This puts design at the table from the outset.
3: Next, as you full well know or should know [AYFWKoSK], the nature of FSCO/I is that it naturally comes in deeply isolated islands of function (this term has been used as a metaphor for probably over a decade). This is because you need many, correct, correctly coupled and organised parts to achieve specific relevant function. And soon, a bit of noise perturbation leads to disorganisation and/or gibberish, thus off into the vast sea of non-function with no handy hill gradient from differential function -- aka fitness function -- to help point you neatly uphill. (I won't bother to address the evidence that real fitness functions seem to be extraordinarily rough and ill-behaved for hill climbing.)
4: Now, a look at the clip shows the focus on incrementalism, which requires that the topology of functional life forms is a vast continent of incrementally sloping improvements from FUCA or LUCA to us and the dozens of major body plan forms on the branches.
5: Where is the actually observed evidence for that? Nowhere; this is an assumption imposed by the back door, and implicitly demanded by a priori imposed evolutionary materialism.
6: Yes, we are perfectly happy to grant incremental improvements within islands of function, but that was never the issue; the issue was to FIND islands of function on drastically limited search resources relative to the search space, given even a toy case like using the 10^57 atoms of our solar system as observers of new tries every 10^-14 s for 10^17 s. The result is that we have a needle-sized blind search of a config space comparable to a cubical haystack as thick as our galaxy's central bulge.
7: So, following the TSZ et al partyline dismissal argument regarding FSCO/I involves willfully substituting a strawman caricature for the real issue, finding islands of function in vast config spaces.
8: Which pivotal issue has been patiently, exhaustively pointed out and explained in one way or another over and over and over for as far back as WmAD in NFL. AYFWKoSK.
9: The interested onlooker is invited to read here, as just a fairly recent explanation of this. If he reads here on in context he can see a 101 that has been on the table for years, but of course has been studiously ignored.
10: Let me clip just the first of these, DDD no 8, on the all too predictable incrementalism talking point:
But, it will be typically objected, we only need to make incremental changes to cumulatively climb the hill of fitness! In fact the challenge as shown [there is an infographic] dominates, one needs to first find the islands of function in the relevant config space, without oracles telling you warmer/colder, as there is no functional feedback until one is actually within such an island as T. Where also, the requirement of multiple, well-matched, properly arranged and coupled component parts to achieve function . . . will confine one to relatively tiny fractions of the space of possible configurations [--> that is, to islands of function deeply isolated in the space of possibilities, to achieve relevant function] . . . . In the case of a solar system scale search, we have some 10^57 atoms, interacting across 10^17 s, with an upper reasonable rate of 10^14 interactions per second. If we were to give each such atom a tray of 500 ordinary H/T coins and if they were flipped and “read” every 10^-14 s, we would be able to sample only a small, almost infinitesimal fraction of the space of possibilities of 500 coins. A picture of the challenge would be that if the samples were comparable in scope to a straw, the config space would be a cubical haystack 1,000 Light years across . . .
. . . likewise, to show how there is no excuse, here is Dembski in NFL, at the turn of the 2000s:
p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity [ --> cf. here], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . . Biological specification always refers to function . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .” p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and and T measures at least 500 bits of information . . . ” [--> in short, the whole context is deeply isolated zones T which in biology are specified on function, which then confront the scope of resources available with a supertask to search, leading to the needle in haystack challenge to find islands of function]
. . . I then went on to note:
In short, to be meaningful or functional, a correct set of core components have to match and must be properly arranged, and while there may be some room to vary, it is not true that just any part popped in in any number of ways can fit in. The flow-through unidirectional flow lungs we commonly see in birds provide a biological example of this effect (and of the implied challenges to incremental evolution based on small random changes that must provide functional advantages in ecological niches in order to be fixed in a viable population). In these animals, two sets of inflatable sacs are used to pump and pull air through the lungs, which is different from the more familiar bellows type lung such as we have. As Michael Denton observed in his epochal 1985 Evolution, a Theory in Crisis:
[[T]he structure of the lung in birds and the overall functioning of the respiratory system is quite unique. No lung in any other vertebrate species is known which in any way approaches the avian system. Moreover, it is identical in all essential details in birds as diverse as humming birds, ostriches and hawks . . . . Just how such an utterly different respiratory system could have evolved gradually from the standard vertebrate design is fantastically difficult to envisage, especially bearing in mind that the maintenance of respiratory function is absolutely vital to the life of an organism to the extent that the slightest malfunction leads to death within minutes. Just as the feather cannot function as an organ of flight until the hooks and barbules are coadapted to fit together perfectly, so the avian lung cannot function as an organ of respiration until the parabronchi system which permeates it and the air sac system which guarantees the parabronchi their air supply are both highly developed and able to function together in a perfectly integrated manner . . . [[Evolution, a Theory in Crisis, 1985, pp. 210 - 12.]
In short, we see here a case of an island of irreducibly complex function, on an organ that is literally vital, and that irreducible complexity would arguably block incremental evolution: intermediates between a bellows lung and a bird's flow-through lung, would be most likely lethally defective -- and would at the very least be arguably dis-advantageous -- and so would be selected against by the very same natural selection that is so often appealed to. For, without the right components -- properly arranged and integrated and with the nervous control system and integrated blood circulatory system and muscular systems -- the bird would most likely die within minutes. In short, the way functionally specific complex organisation leads to islands of function in wider configuration spaces is highly relevant to major biological systems, not just technological ones. [--> BTW, while this is a case of IC as well, the FSCO/I argument does NOT rely on IC in general. They happen to frequently come together.] As a direct result, in our general experience, and observation, if the functional result is complex and specific enough, the most likely cause is intelligent choice, or design. This has a consequence. For, this need for choosing and correctly arranging then hooking up correct, matching parts in a specific pattern implicitly rules out the vast majority of possibilities and leads to the concept of islands of function in a vast sea of possible but meaningless and/or non-functional configurations. And, arguably to design -- the commonly observed cause of FSCO/I -- as the best explanation for such cases. So also, if you would dispute the point that such islands of function dependent on specific clusters of combinations of particular parts exist in seas of non-function, as a typical and even reliably observable pattern, it is necessary to support that claim by observed example. That is, show a case where by blind chance and equally blind mechanical necessity, complex functional organisation emerges from non-functional arrangements, and grows in complexity and degree of successful operation from one step to the next; with particular reference to the rise of new major body plans in life forms. Variations and adaptations within existing body plans do not answer to this. That is, the challenge is to get to shorelines of islands of function in seas of non-function, or else to show that here is a vast continent of function that can be incrementally accessed through a branching tree of life. On fair comment, despite the various lines of evidence and the many headlined icons of evolution that are put forth to make Darwinian evolutionary mechanisms seem plausible, this challenge has not been met after over 150 years of trying. Consequently, it is equally fair comment to observe that such functionally specific, complex organisation and associated information have only one empirically observed, adequate cause: purposeful, intelligently directed configuration, i.e. design. Therefore, design theorists argue that the world of life points on such empirically reliable signs to design as a key causal factor in the origins of life as we see and experience it. But, in turn, that has to be shown, not simply asserted . . .
. . . all there, a few known clicks away, for years.
11: There are many, many examples of the same basic correction going back across years; they have just all been willfully ignored. So, it is plain that, considering the penumbra of objectors and their sites as a whole, we are dealing with deliberately and insistently maintained strawman-tactic DDD stuff, not a serious and cogent argument.
12: Which inadvertently implies that the design inference on FSCO/I is quite strongly warranted, if to reject it objectors are forced to first strawmannise it. KF kairosfocus
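For concreteness, the 500-coin comparison in point 6 above can be checked in a few lines of Python; the round figures (10^57 atoms, one observation per atom every 10^-14 s, for 10^17 s) are the same generous assumptions used in the comment, not measured quantities.

import math

log10_configs = 500 * math.log10(2)   # 2^500 possible H/T patterns, ~3.27 x 10^150
log10_samples = 57 + 14 + 17          # 10^57 atoms x 10^14 samples/s x 10^17 s = 10^88

print(f"log10(configurations of 500 coins) ~ {log10_configs:.1f}")   # ~150.5
print(f"log10(samples available)           = {log10_samples}")       # 88
print(f"log10(fraction sampled)            ~ {log10_samples - log10_configs:.1f}")

On these figures the sample covers on the order of one part in 10^62 of the configuration space.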
Keith S, "The fact that we can’t be absolutely certain does not mean that our thoughts can’t be trusted. It only means that they can’t be trusted absolutely". Are you absolutely certain about that?? logically_speaking
wd400, Ok, so we have established that you side with Keiths, and disagree with Thorton that Wagner is a crackpot who introduced a new law of nature. Ok, we got that. Thorton is the crackpot. Now, since you don't think that Wagner is introducing anything new, and that we have known for a while that random mutations cannot account for all the diversity we see in life, what is the definition of the Theory of Evolution? phoodoo
And if you are still struggling to understand I'll give it to you in plain English Don't shit where you eat. Andre
Thornton, You broke the truce by insulting Denyse. Don't call me a liar; your comment is there, it's not gone, and is available for all to see. It is clear that your materialist tendencies have rendered you incapable of displaying any good manners. Andre
Thornton Should you not be wholly immersed in your own blessed moral development so that you can lead a good life? Why worry about ours? Andre
Andre
Thornton You insulted Denyse, for that good riddance……
So much for the truce. Less than one day and Andre goes right back to his lies and insults. I guess it's true a leopard can't change his spots or a skunk his stink. Enjoy your incestual group-grope with your buddies while the scientific world laughs and passes you by. Don't forget that inbreeding produces idiots. Thorton
But Thornton why do you comment here at all? If we are deluded why are you trying to convert us? In the greater scheme of things what does it matter? Universes come and go, galaxies come and go, solar systems come and go, planets come and go, and life comes and goes. What are you trying to save us from if it really does not matter? Andre
Why is Thorton so stupid? That's what I want to know. Mapou
Fair enough Keith but you brought it up.... by taking a stab at me having to go sleep.... Andre
It isn't the middle of the night, Andre. There are places other than the East Coast in America, you know. keith s
Thornton You insulted Denyse, for that good riddance...... You don't get invited to someone's house and then crap on their couch and get to complain about the smell. Serves you right. Andre
It is morning, 07:48 to be exact. You know there are places in the world other than America, right? The earth is round and spins on its own axis, giving us 24 hours split into dark and light. "Let us separate light from darkness" But why are you arguing about things that we can't know in the middle of the night? Should you not be spending your night time next to loved ones? Andre
Andre
And Thornton was banned for insulting Dense, serves him right, I’d say good riddance because he was searching for truth as much as a shark desires to become a vegeterian
Thank you Andre for keeping your word about no more insults. Did you figure I couldn't reply so you took a last cheap shot? Mighty Christian of you. For the record I never insulted Denyse. My comment was directed solely to the author of the comment I quoted. Not to Denyse, not to anyone else at UD. To be honest I'm not surprised the censorship and banning have started again. ID-Creationism has never been able to stand up to the slightest scientific scrutiny. The only way you guys ever feel good about yourselves is hiding here in your protected little cubby hole telling each other how smart you are. I see now why you and the others won't venture out to places like TSZ or ATBC. You wouldn't last a week on a board with scientifically knowledgeable posters expecting you to defend the ID Creationist idiocy. Thorton
Andre, Perhaps if you sleep on it you'll be less confused in the morning. keith s
When I said that individuals are more important, what I meant to say is this: For the theist, individuals are more important because they were created by God and will live eternally. For the materialist, the individual only lives for about 70 years, and groups, nations and civilisations are more important. That is the one big difference in our policy, and it shows in our practical proposals. Andre
Keith S Why are you quoting words from a book of a God that doesn't exist? Secondly, you did not understand what I was pointing out. Lastly, I can't believe you because you don't even believe yourself. Andre
Keith S I have, and since you've concluded that we can't know that we can know, why should I believe what you say? Andre
Andre:
There is the difference between the materialist and theist right there…….. For the materialist groups are more important than individuals but for the theist individuals are more important, this one big difference runs through their whole policy.
It runs through their whole policy, eh?
All the believers were together and had everything in common. They sold property and possessions to give to anyone who had need. Every day they continued to meet together in the temple courts. They broke bread in their homes and ate together with glad and sincere hearts, praising God and enjoying the favor of all the people. Acts 2:44-47, NIV
And:
All the believers were one in heart and mind. No one claimed that any of their possessions was their own, but they shared everything they had. With great power the apostles continued to testify to the resurrection of the Lord Jesus. And God’s grace was so powerfully at work in them all that there were no needy persons among them. For from time to time those who owned land or houses sold them, brought the money from the sales and put it at the apostles’ feet, and it was distributed to anyone who had need. Acts 4:32-35, NIV
keith s
Andre, Slow down and think it through. keith s
Believe* Andre
Keith S I know for a fact that you don't be live yourself....... We can't know that we can know? Seriously you are a voice of reason? Thanks for the laugh buddy, this Saturday is gonna be great because now I know with absolute certainty that you are 100% foolish. Andre
Phoodoo, Wagner is a very fine scientist. What he says is interesting, but not terribly new. wd400
Forgive my ignorance but was Origin of the species ever peer-reviewed? Andre
Andre:
You wrote an entire article about the fact that we can’t be certain.
Yes. Now concentrate, Andre: The fact that we can't be absolutely certain does not mean that our thoughts can't be trusted. It only means that they can't be trusted absolutely. Slow down and think it through. keith s
wd400, I have no idea what you are trying to say. Are you on the side of Keiths, who says that what Wagner is writing is nothing new, or do you side with Thorton, who wants to claim that Wagner is just another crackpot who can't get anything printed that is peer reviewed? phoodoo
WD400 There is the difference between the materialist and theist right there........ For the materialist groups are more important than individuals but for the theist individuals are more important, this one big difference runs through their whole policy. Andre
Phoodoo, It's a "real" trajectory in the sense that it's a path through space over time, if you are willing to call the space of all genotypes a "space". Jerry, Recombination is indeed basic genetics, but it is also relevant to evolutionary biology. The tension between recombination and selection, for instance, is the most important question in understanding speciation. Mung's mistake above, where he seems to treat each individual as a separate entity unlinked to others, is another example of the concept's relevance. wd400
Keith S You wrote an entire article about the fact that we can't be certain. Or did you mean that only applies to other delusional fools and your chemical reactions are the only rational ones? How do we test that, Keith S? How do we check if your chemical reactions are true? Andre
Andre writes:
We can all safely discard anything Keith S has to say he admitted yesterday that we can’t really trust the chemical reactions in his head.
Of course I said no such thing. You better hurry, Andre. Mung and Cantor are way ahead of you in the race to the bottom. keith s
How can we trust anything you have to say Keith S? You really don't seem to understand Andreas Wagner is pointing out that it is non-random evolution....... You know the one William Wallace developed? But Keith won't see that because he is blinded by his I am angry at God campaign...... Andre
Cantor, You haven't actually said anything. All you've done is make the vaguest sneer, then pretend you've made a substantial rebuttal. Frankly, you look faintly ridiculous at this point. If you want to establish why you think Thorton's comment was wrong, by all means do. wd400
Vishnu,
You made the accusation that Mung was preferring the announcement contents over the book,
No. Here's what I actually wrote:
All you have to do is read the interview to see that the book announcement is hype.
But since you insist on being spoon-fed, open wide for the choo-choo train. I quoted the interview with Wagner, and wrote:
There is nothing anti-Darwinian about Wagner’s thesis. DNA changes randomly, as he stresses. It’s nothing but selection working on random mutations.
Mung quoted the book announcement, and wrote:
That’s about as non-darwinian as you can get.
I went by Wagner's words, and Mung went by the book announcement. You foolishly bought into Mung's tall tale. keith s
I hope the author is not terrorized into changing his stand. His project funding might be stopped, his membership may be cancelled, and all manner of pressure may be put on him so that he will start giving more interviews claiming the book is actually in support of Darwin and that Darwin's theory predicts all these new laws. Me_Think
Andreas Wagner:
The arrival of the fittest here simply means how new traits originate. For example, there is this interesting fish called the winter flounder, which lives close to the Arctic Circle, in very deep, cold waters—so cold that our body fluids would freeze solid. Yet this fish survives there. It turns out that its ancestors discovered a new class of antifreeze proteins that work a bit similar to the antifreeze in your car.
This quote from Andreas Wagner is wrong on so many fronts; why would anyone need, or want, to read anything further of what he writes, let alone take anything he states seriously? franklin
Denyse* Andre
We can all safely discard anything Keith S has to say; he admitted yesterday that we can't really trust the chemical reactions in his head. As for Astroman...... Troll alert. And Thornton was banned for insulting Dense, serves him right, I'd say good riddance because he was searching for truth as much as a shark desires to become a vegetarian. Andre
Keiths: Apparently Vishnu, like Mung, can’t tell the difference between a book and a book announcement.
You made the accusation that Mung was preferring the announcement contents over the book, all the while trying to get you to read the book (instead of merely judging it by the blurb.) Re-read the posts. It's hard to believe you're that dense. But then again, I have my suspicions about your motives. BTW, have you read the book? Vishnu
Thorton is arguing that the author is a crackpot who is claiming to have discovered an entirely new law of nature. Keiths is arguing that you would have to be a crackpot to think this guy is claiming any new law of nature. And meanwhile there still is no theory of evolution. Perhaps we should just let Thorton and Keiths argue it out amongst themselves. But they aren't smart enough to realize they are completely disagreeing with each other. At least Thorton has found someone else to call a crackpot. Imagine that. phoodoo
Apparently Vishnu, like Mung, can't tell the difference between a book and a book announcement. keith s
I'm not confused, cantor. I'm calling your bluff. Let's see you squirm a little more. Tell us how operations research invalidates Thorton’s statement:
Evolution doesn’t have to search the ginormous whole universe search space looking for innovations. All it does in each new generation is search the space immediately surrounding the existing working copy. If it finds a small improvement, it keeps it.
After all, you said:
That makes a lot of sense… if you don’t think about it. Do you have any background whatsoever in the field of mathematics known as operations research?
Surely you're not foolish enough to make a statement that you can't back up, hoping to bluff your opponent into folding. Are you, cantor? keith s
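For reference, here is a minimal toy sketch (Python) of the process the disputed sentence describes: each generation samples only single-mutation neighbours of the current sequence and keeps any change that does not reduce fitness. The toy fitness function scores closeness to a fixed target string, which is exactly the kind of built-in 'oracle' the islands-of-function side of this thread objects to; the sketch shows only the mechanics of neighbourhood-limited sampling and settles nothing about whether such moves can reach distant islands.

import random

random.seed(0)
ALPHABET = "ACGT"
TARGET = "ACGTACGTACGTACGTACGT"        # assumed toy target, 20 bases

def fitness(seq):
    """Toy score: number of positions matching the fixed target."""
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate_one(seq):
    """Sample a single-mutation neighbour of the current sequence."""
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(ALPHABET) + seq[i + 1:]

current = "".join(random.choice(ALPHABET) for _ in TARGET)
for generation in range(2000):
    candidate = mutate_one(current)            # only the immediate neighbourhood is sampled
    if fitness(candidate) >= fitness(current): # keep improvements and neutral changes
        current = candidate

print(current, fitness(current), "of", len(TARGET))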
keiths: You were arguing that a book announcement written by some marketing person should take precedence over Wagner’s own words. Not a smart move. mung: And you reached this conclusion from my statements that reading the book [i.e., reading Wagner's own words] should take precedence?
Astonishing. I'm starting to suspect that keiths doesn't believe anything he says, but is here only to see how much of his B.S. the ops will put up with. And, so long, Thornton. The comedy was good for a short time, but then it got played out so quickly. Time to write some new material. Go to clown school. I think that just might be the ticket for you. Vishnu
61 keith s October 31, 2014 at 7:18 pm Yes, cantor, please “dumb it down” for us.
"Us". That's rich. Go read posts 39 and 40. If you disagree or don't understand, state clearly what is confusing you. cantor
A Reader Asks: Can Microevolutionary Changes Add Up to Macroevolutionary Change? Casey Luskin - October 31, 2014 http://www.evolutionnews.org/2014/10/a_reader_asks_c090811.html bornagain77
As was pointed out previously, the darwinian assumption of unlimited plasticity is contradicted by empirical observation. Fairly severe constraints on plasticity are found for micro-organisms and even for individual proteins.
Mutations : when benefits level off - June 2011 - (Lenski's e-coli after 50,000 generations) Excerpt: After having identified the first five beneficial mutations combined successively and spontaneously in the bacterial population, the scientists generated, from the ancestral bacterial strain, 32 mutant strains exhibiting all of the possible combinations of each of these five mutations. They then noted that the benefit linked to the simultaneous presence of five mutations was less than the sum of the individual benefits conferred by each mutation individually. http://www2.cnrs.fr/en/1867.htm?theme1=7 Testing Evolution in the Lab With Biologic Institute's Ann Gauger - podcast with link to peer-reviewed paper Excerpt: Dr. Gauger experimentally tested two-step adaptive paths that should have been within easy reach for bacterial populations. Listen in and learn what Dr. Gauger was surprised to find as she discusses the implications of these experiments for Darwinian evolution. Dr. Gauger's paper, "Reductive Evolution Can Prevent Populations from Taking Simple Adaptive Paths to High Fitness,". http://intelligentdesign.podomatic.com/entry/2010-05-10T15_24_13-07_00 When Theory and Experiment Collide — April 16th, 2011 by Douglas Axe Excerpt: Based on our experimental observations and on calculations we made using a published population model [3], we estimated that Darwin’s mechanism would need a truly staggering amount of time—a trillion trillion years or more—to accomplish the seemingly subtle change in enzyme function that we studied. http://www.biologicinstitute.org/post/18022460402/when-theory-and-experiment-collide Corticosteroid Receptors in Vertebrates: Luck or Design? - Ann Gauger - October 11, 2011 Excerpt: if merely changing binding preferences is hard, even when you start with the right ancestral form, then converting an enzyme to a new function is completely beyond the reach of unguided evolution, no matter where you start. http://www.evolutionnews.org/2011/10/luck_or_design051801.html
The reason for this severe constraint on proteins transforming into different proteins with a new function is what is termed 'context dependency'.
(A Reply To PZ Myers) Estimating the Probability of Functional Biological Proteins? Kirk Durston , Ph.D. Biophysics - 2012 Excerpt (Page 4): The Probabilities Get Worse This measure of functional information (for the RecA protein) is good as a first pass estimate, but the situation is actually far worse for an evolutionary search. In the method described above and as noted in our paper, each site in an amino acid protein sequence is assumed to be independent of all other sites in the sequence. In reality, we know that this is not the case. There are numerous sites in the sequence that are mutually interdependent with other sites somewhere else in the sequence. A more recent paper shows how these interdependencies can be located within multiple sequence alignments.[6] These interdependencies greatly reduce the number of possible functional protein sequences by many orders of magnitude which, in turn, reduce the probabilities by many orders of magnitude as well. In other words, the numbers we obtained for RecA above are exceedingly generous; the actual situation is far worse for an evolutionary search. http://powertochange.com/wp-content/uploads/2012/11/Devious-Distortions-Durston-or-Myers_.pdf "Why Proteins Aren't Easily Recombined, Part 2" - Ann Gauger - May 2012 Excerpt: "So we have context-dependent effects on protein function at the level of primary sequence, secondary structure, and tertiary (domain-level) structure. This does not bode well for successful, random recombination of bits of sequence into functional, stable protein folds, or even for domain-level recombinations where significant interaction is required." http://www.biologicinstitute.org/post/23170843182/why-proteins-arent-easily-recombined-part-2
That the amino acids in proteins are mutually interdependent, i.e. context dependent, is fairly easy to demonstrate,,,, proteins have now been shown to have a 'Cruise Control' mechanism, which works to 'self-correct' the integrity of the protein structure and function from any random mutations imposed on them.
Proteins with cruise control provide new perspective: "A mathematical analysis of the experiments showed that the proteins themselves acted to correct any imbalance imposed on them through artificial mutations and restored the chain to working order." http://www.princeton.edu/main/news/archive/S22/60/95O56/
Thus the entire protein operates as a cohesive whole in its specific function! How is it possible for a protein to operate as a cohesive whole despite mutations to individual amino acids? The reason is that the entire protein is 'quantumly entangled' as a single quantum state,,,
Coherent Intrachain energy migration at room temperature - Elisabetta Collini & Gregory Scholes - University of Toronto - Science, 323, (2009), pp. 369-73 Excerpt: The authors conducted an experiment to observe quantum coherence dynamics in relation to energy transfer. The experiment, conducted at room temperature, examined chain conformations, such as those found in the proteins of living cells. Neighbouring molecules along the backbone of a protein chain were seen to have coherent energy transfer. Where this happens quantum decoherence (the underlying tendency to loss of coherence due to interaction with the environment) is able to be resisted, and the evolution of the system remains entangled as a single quantum state. http://www.scimednet.org/quantum-coherence-living-cells-and-protein/
Thus, very much contrary to Darwinian presuppositions, we find that proteins are not searching for new functions when they have mutations occur to them; we find instead that proteins are operating as a cohesive whole in their specific function. Moreover, the entire protein is involved in resisting the effects of mutations on amino acids within the protein sequence. A few more notes on protein non-evolvability:
Stability effects of mutations and protein evolvability. October 2009 Excerpt: The accepted paradigm that proteins can tolerate nearly any amino acid substitution has been replaced by the view that the deleterious effects of mutations, and especially their tendency to undermine the thermodynamic and kinetic stability of protein, is a major constraint on protein evolvability,, http://www.ncbi.nlm.nih.gov/pubmed/19765975 “Mutations are rare phenomena, and a simultaneous change of even two amino acid residues in one protein is totally unlikely. One could think, for instance, that by constantly changing amino acids one by one, it will eventually be possible to change the entire sequence substantially… These minor changes, however, are bound to eventually result in a situation in which the enzyme has ceased to perform its previous function but has not yet begun its ‘new duties’. It is at this point it will be destroyed” Maxim D. Frank-Kamenetski, Unraveling DNA, 1997, p. 72. (Professor at Brown U. Center for Advanced Biotechnology and Biomedical Engineering)
bornagain77
Apart from the mixed up last sentence. Recombination means there really is a single trajectory shared by all members of a population.
But all this has ever done is produce trivial changes to the gene pool. It is modern day genetics. It is not part of the evolution debate. jerry
"Recombination means there really is a single trajectory shared by all members of a population." Is that appearance of trajectory, or real trajectory? http://en.m.wikipedia.org/wiki/Trajectory ppolish
wd400:
Apart from the mixed up last sentence. Recombination means there really is a single trajectory shared by all members of a population.
Thank God there's only a single population sharing a single genome. Or not. Mung
keiths:
you were arguing that a book announcement written by some marketing person should take precedence over Wagner’s own words. Not a smart move.
And you reached this conclusion from my statements that reading the book [i.e., reading Wagner's own words] should take precedence? keiths:
Perhaps keiths, unlike Mung, can tell the difference between a book announcement and the book itself.
Meanwhile, we're waiting for you to actually read the book before declaring its content... waiting ... keiths:
IDers, I am really glad that Mung is on your side, not mine.
No one here knows what side you're on, nor why, so that's not saying much. Mung
This is funny. keith s
Yes, cantor, please "dumb it down" for us. Tell us exactly how operations research invalidates Thorton’s statement:
Evolution doesn’t have to search the ginormous whole universe search space looking for innovations. All it does in each new generation is search the space immediately surrounding the existing working copy. If it finds a small improvement, it keeps it.
keith s
Mung,
And here I thought I was arguing that people should read the book.
No, you were arguing that a book announcement written by some marketing person should take precedence over Wagner's own words. Not a smart move.
Perhaps keiths has some evidence that the words in the book are not the words of Andreas Wagner.
Perhaps keiths, unlike Mung, can tell the difference between a book announcement and the book itself. IDers, I am really glad that Mung is on your side, not mine. Besides Joe, is there anyone less competent posting on the ID side? keith s
https://uncommondesc.wpengine.com/intelligent-design/evolution-driven-by-laws-not-random-mutations/#comment-524239 I already answered that question in post 39. Tell me what part you don't understand and I'll try my best to dumb it down a bit more for you. cantor
Mung. Yup. Apart from the mixed up last sentence. Recombination means there really is a single trajectory shared by all members of a population. wd400
Ginormous number searching their own little spaces. Searching for the same thing. Most find it. "Seek and you shall find" ppolish
wd400, really? Mung
keiths:
Poor Mung is trying to argue that we should ignore Andreas Wagner’s own words and focus instead on marketing hype written by someone else.
And here I thought I was arguing that people should read the book. keiths cries "marketing hype!" I say, read the book. Perhaps keiths has some evidence that the words in the book are not the words of Andreas Wagner. I think it's a safe bet that keiths hasn't read the book. Mung
I find the first few seconds of this to be a fun, but accurate, metaphor for evolution. Like many organisms, the object portrayed is designed to work one way, but can transform, within predetermined constraints and intents, into a lesser model when circumstances change. It's still within the genus of vehicle and that's all it will ever be. Because that's what it was designed to be. bb
Poor Mung is trying to argue that we should ignore Andreas Wagner's own words and focus instead on marketing hype written by someone else. keith s
keiths hasn't read the book, doesn't need to read the book. Just for my own personal edification, keiths, have you read either of the books by Stephen Meyer? Mung
Mung:
Well darn. Of course keiths has read the book and knows the book announcement is mere hype. right keiths?
Mung, All you have to do is read the interview to see that the book announcement is hype. However, it will require more than your customary 20 seconds of sustained attention. keith s
Neither of you are believable. If there was only a single genome you might have a point. But there isn't. Recombination means there sort of is, actually. Lots of little variant copies can go out searching the space, and those variants that do well are likely to be brought back together by recombination in the future. wd400
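A minimal toy sketch (Python) of that recombination point: two lineages each pick up a different 'beneficial' variant independently, and a single crossover can put both into one offspring. The sequences, the variant positions and the single-point crossover are illustrative assumptions only; nothing here speaks to the scale of change being argued over elsewhere in the thread.

import random

random.seed(0)
L = 20
ancestor = ["a"] * L

parent1 = ancestor.copy(); parent1[3] = "B1"    # variant found by lineage 1
parent2 = ancestor.copy(); parent2[15] = "B2"   # variant found by lineage 2

def recombine(p1, p2):
    """Single-point crossover: prefix of one parent joined to the suffix of the other."""
    x = random.randrange(1, L)
    return p1[:x] + p2[x:]

trials = 10_000
both = 0
for _ in range(trials):
    child = recombine(parent1, parent2)
    if "B1" in child and "B2" in child:
        both += 1

# With variant sites at positions 3 and 15, any crossover point from 4 to 15
# (12 of the 19 possible points) joins the two variants, so roughly 0.63 is expected.
print(f"fraction of offspring carrying both variants: {both / trials:.2f}")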
It's amusing to see cantor dodging the question. Here it is again, pseudo-Georg:
Prove me wrong by showing how operations research invalidates Thorton’s statement:
Evolution doesn’t have to search the ginormous whole universe search space looking for innovations. All it does in each new generation is search the space immediately surrounding the existing working copy. If it finds a small improvement, it keeps it.
keith s
God Is the Best Explanation For Why Anything At All Exists – William Lane Craig – video http://www.youtube.com/watch?v=TjuqBxg_5mA Aquinas’ Third way (argument from existence) – video http://www.youtube.com/watch?v=V030hvnX5a4 Aquinas’ First Way – (The First Mover – Unmoved Mover argument) – video http://www.youtube.com/watch?v=Qmpw0_w27As “The ‘First Mover’ is necessary for change occurring at each moment.” Michael Egnor – Aquinas’ First Way http://www.evolutionnews.org/2009/09/jerry_coyne_and_aquinas_first.html
As to the ancient first mover argument of Aquinas in particular, the double slit experiment is excellent for illustrating that the ‘unmoved mover’ argument is empirically valid. In the following video Anton Zeilinger, whose group is arguably the best group of experimentalists in quantum physics today, ‘tries’ to explain the double slit experiment to Morgan Freeman:
Quantum Mechanics – Double Slit Experiment. Is anything real? (Prof. Anton Zeilinger) – video http://www.youtube.com/watch?v=ayvbKafw2g0
Prof. Zeilinger makes this rather startling statement in the preceding video that meshes perfectly with the 'first mover argument':
“The path taken by the photon is not an element of reality. We are not allowed to talk about the photon passing through this or this slit. Neither are we allowed to say the photon passes through both slits. All this kind of language is not applicable.” Anton Zeilinger
If that was not enough to get Dr. Zeilinger’s point across, at the 4:12 minute mark in this following video,,,
Double Slit Experiment – Explained By Prof Anton Zeilinger – video http://www.metacafe.com/watch/6101627/
Professor Zeilinger states,,,
“We know what the particle is doing at the source when it is created. We know what it is doing at the detector when it is registered. But we do not know what it is doing in-between.” Anton Zeilinger
or as Dr. Egnor succinctly put the argument,
“The ‘First Mover’ is necessary for change occurring at each moment.” - Michael Egnor
Supplemental quote:
“Joel Primack, a cosmologist at the University of California, Santa Cruz, once posed an interesting question to the physicist Neil Turok: “What is it that makes the electrons continue to follow the laws.” Turok was surprised by the question; he recognized its force. Something seems to compel physical objects to obey the laws of nature, and what makes this observation odd is just that neither compulsion nor obedience are physical ideas.,,, Physicists since Einstein have tried to see in the laws of nature a formal structure that would allow them to say to themselves, “Ah, that is why they are true,” and they have failed.” Berlinski, The Devil’s Delusion pg. 132-133
Verse and Music:
Acts 17:28 For in him we live and move and have our being.’ As some of your own poets have said, ‘We are his offspring.’ Britt Nicole - Gold https://www.youtube.com/watch?v=p9PjrtcHJPo
bornagain77
So now we have an evolutionist searching for 'hidden laws' so as to explain how 'wings, eyeballs, knees, camouflage, lactose digestion, photosynthesis, and the rest of nature’s creative marvels', came to be? HMMMM,,, Unfortunately for him 'laws', whether they be hidden laws or in your face laws, have never 'caused' anything to happen in this universe. The following Professor, who is a former atheist, gets this point across more clearly than anyone else I have heard:
A Professor's Journey out of Nihilism: Why I am not an Atheist - University of Wyoming - J. Budziszewski Excerpt page12: "There were two great holes in the argument about the irrelevance of God. The first is that in order to attack free will, I supposed that I understood cause and effect; I supposed causation to be less mysterious than volition. If anything, it is the other way around. I can perceive a logical connection between premises and valid conclusions. I can perceive at least a rational connection between my willing to do something and my doing it. But between the apple and the earth, I can perceive no connection at all. Why does the apple fall? We don't know. "But there is gravity," you say. No, "gravity" is merely the name of the phenomenon, not its explanation. "But there are laws of gravity," you say. No, the "laws" are not its explanation either; they are merely a more precise description of the thing to be explained, which remains as mysterious as before. For just this reason, philosophers of science are shy of the term "laws"; they prefer "lawlike regularities." To call the equations of gravity "laws" and speak of the apple as "obeying" them is to speak as though, like the traffic laws, the "laws" of gravity are addressed to rational agents capable of conforming their wills to the command. This is cheating, because it makes mechanical causality (the more opaque of the two phenomena) seem like volition (the less). In my own way of thinking the cheating was even graver, because I attacked the less opaque in the name of the more. The other hole in my reasoning was cruder. If my imprisonment in a blind causality made my reasoning so unreliable that I couldn't trust my beliefs, then by the same token I shouldn't have trusted my beliefs about imprisonment in a blind causality. But in that case I had no business denying free will in the first place." http://www.undergroundthomist.org/sites/default/files/WhyIAmNotAnAtheist.pdf
C.S. Lewis humorously stated the point like this:
"to say that a stone falls to earth because it's obeying a law, makes it a man and even a citizen" - CS Lewis
The following ‘doodle video' is also excellent for getting this point across:
“In the whole history of the universe the laws of nature have never produced, (i.e. caused), a single event.” C.S. Lewis - doodle video https://www.youtube.com/watch?v=_20yiBQAIlk
In other words, law or necessity does not have causal adequacy within itself. i.e. Law is not a ‘mechanism’ that has ever ’caused’ anything to happen in the universe but is merely a description of a lawlike regularity within the universe. The early Christian founders of modern science understood this distinction between law and lawgiver very well,,,
Not the God of the Gaps, But the Whole Show – John Lennox – 2012 Excerpt: God is not a “God of the gaps”, he is God of the whole show.,,, C. S. Lewis put it this way: “Men became scientific because they expected law in nature and they expected law in nature because they believed in a lawgiver.” http://www.christianpost.com/news/the-god-particle-not-the-god-of-the-gaps-but-the-whole-show-80307/
Perhaps the most famous confusion of a mathematical description of a law and the causal agency behind the law is Stephen Hawking’s following statement:
“Because there is a law such as gravity, the universe can and will create itself from nothing. Spontaneous creation is the reason there is something rather than nothing, why the universe exists, why we exist.The universe didn’t need a God to begin; it was quite capable of launching its existence on its own,” Stephen Hawking http://www.dailygalaxy.com/my_weblog/2010/09/the-universe-exists-because-of-spontaneous-creation-stephen-hawking.html
Here is an excerpt of an article, (that is well worth reading in full), in which Dr. Gordon exposes Stephen Hawking’s delusion for thinking that mathematical description and agent causality are the same thing.
BRUCE GORDON: Hawking’s irrational arguments – October 2010 Excerpt: ,,,The physical universe is causally incomplete and therefore neither self-originating nor self-sustaining. The world of space, time, matter and energy is dependent on a reality that transcends space, time, matter and energy. This transcendent reality cannot merely be a Platonic realm of mathematical descriptions, for such things are causally inert abstract entities that do not affect the material world,,, Rather, the transcendent reality on which our universe depends must be something that can exhibit agency – a mind that can choose among the infinite variety of mathematical descriptions and bring into existence a reality that corresponds to a consistent subset of them. This is what “breathes fire into the equations and makes a universe for them to describe.” Anything else invokes random miracles as an explanatory principle and spells the end of scientific rationality.,,, Universes do not “spontaneously create” on the basis of abstract mathematical descriptions, nor does the fantasy of a limitless multiverse trump the explanatory power of transcendent intelligent design. What Mr. Hawking’s contrary assertions show is that mathematical savants can sometimes be metaphysical simpletons. Caveat emptor. http://www.washingtontimes.com/news/2010/oct/1/hawking-irrational-arguments/
Moreover, the same type of confusion arises when atheists appeal to 'random chance' as a causal agent instead of merely a description. When people say that something 'happened by chance' they are not actually appealing to a known causal mechanism but are instead using chance as a 'placeholder for ignorance' as to an actual causal mechanism. Stephen Talbott puts the situation like this,,
Evolution and the Illusion of Randomness – Stephen L. Talbott – Fall 2011 Excerpt: In the case of evolution, I picture Dennett and Dawkins filling the blackboard with their vivid descriptions of living, highly regulated, coordinated, integrated, and intensely meaningful biological processes, and then inserting a small, mysterious gap in the middle, along with the words, “Here something random occurs.” This “something random” looks every bit as wishful as the appeal to a miracle. It is the central miracle in a gospel of meaninglessness, a “Randomness of the gaps,” demanding an extraordinarily blind faith. At the very least, we have a right to ask, “Can you be a little more explicit here?” http://www.thenewatlantis.com/publications/evolution-and-the-illusion-of-randomness
In other words, when people say that something "happened randomly by chance", usually a mishap, they are in fact assuming an impersonal purposeless determiner of unaccountable happenings which is, in fact, impossible to separate from causal agency, i.e. 'every bit as wishful as the appeal to a miracle'. Although the term "chance" can be defined as a mathematical probability, such as the chance involved in flipping a coin, when Darwinists use the term 'random chance', generally it's substituting for a more precise word such as "cause", especially when the cause, i.e. 'mechanism', is not known. Several people have noted this 'shell game' that is played with the word 'chance'..
“To personify ‘chance’ as if we were talking about a causal agent,” notes biophysicist Donald M. MacKay, “is to make an illegitimate switch from a scientific to a quasi-religious mythological concept.” Similarly, Robert C. Sproul points out: “By calling the unknown cause ‘chance’ for so long, people begin to forget that a substitution was made. . . . The assumption that ‘chance equals an unknown cause’ has come to mean for many that ‘chance equals cause.’”
Thus, when an atheist states that something happened by chance, we have every right to ask, as Talbott pointed out, "Can you be a little more explicit here?" In conclusion, contrary to how atheists imagine reality to be structured, their appeal to random chance and mathematical description as being causally adequate in themselves is, in reality, an appeal to vacuous explanations for a 'causal mechanism'. ,,, "Vacuous explanations for a causal mechanism" reminds me of Lawrence Krauss's argument against God from a few years ago in his book 'A Universe from Nothing',,
Not Understanding Nothing – A review of A Universe from Nothing – Edward Feser – June 2012 Excerpt: A critic might reasonably question the arguments for a divine first cause of the cosmos. But to ask “What caused God?” misses the whole reason classical philosophers thought his existence necessary in the first place. So when physicist Lawrence Krauss begins his new book by suggesting that to ask “Who created the creator?” suffices to dispatch traditional philosophical theology, we know it isn’t going to end well. ,,, ,,, But Krauss simply can’t see the “difference between arguing in favor of an eternally existing creator versus an eternally existing universe without one.” The difference, as the reader of Aristotle or Aquinas knows, is that the universe changes while the unmoved mover does not, or, as the Neoplatonist can tell you, that the universe is made up of parts while its source is absolutely one; or, as Leibniz could tell you, that the universe is contingent and God absolutely necessary. There is thus a principled reason for regarding God rather than the universe as the terminus of explanation. http://www.firstthings.com/article/2012/05/not-understanding-nothing
To put what I consider the main philosophical arguments for God more simply (at the risk of irritating more than a few philosophers), atheistic materialists do not have a causal mechanism to appeal to that explains how the universe originated, nor one that explains why the universe continues to exist, nor why anything in the universe continues to exist, nor do they even have a causal mechanism for explaining how anything, any particle in the universe, moves within the universe! Here are a few notes along that line:
The Kalam Cosmological Argument (argument from the beginning of the universe) – video https://www.youtube.com/watch?v=6CulBuMCLg0
bornagain77
Frick:
Evolution doesn’t have to search the ginormous whole universe search space looking for innovations. All it does in each new generation is search the space immediately surrounding the existing working copy. If it finds a small improvement, it keeps it.
Frack:
Thorton isn’t talking about physical space; he’s talking about an abstract search space. Every genotype can be visualized as occupying a point in a many-dimensional genetic space. Evolution over time then amounts to following a trajectory in this space. Thorton’s point is that evolution doesn’t search the entire space; it just searches the tiny portion of the space that is accessible via mutations from the current location.
Neither of you are believable. If there was only a single genome you might have a point. But there isn't. There is no single point. There is no single trajectory. You don't know where in the search space any genome is located. And then there's the paper that claims the entire search space has been covered many times over. Anyone recall that one? Mung
43 wd400 October 31, 2014 at 5:27 pm Cantor. Again, please explain why you think this is the case.
Why do I think what is the case? cantor
I'm gonna miss the Frick and Frack show here at UD.
Can random mutations over a mere 3.8 billion years solely be responsible for wings, eyeballs, knees, camouflage, lactose digestion, photosynthesis, and the rest of nature’s creative marvels? And if the answer is no, what is the mechanism that explains evolution’s speed and efficiency?
That's about as non-darwinian as you can get. Darwin: Evolution is so very slow and clumsy. Not Darwin: Evolution is fast and efficient.
In Arrival of the Fittest, renowned evolutionary biologist Andreas Wagner draws on over fifteen years of research to present the missing piece in Darwin’s theory.
At long last! I just don't understand why we can't all just unite and rejoice together. keiths:
You fell for the hype in the book announcement, and didn’t bother to do the ten extra minutes of research that would have saved you from embarrassment.
Well darn. Of course keiths has read the book and knows the book announcement is mere hype. Right, keiths? Mung
Cantor. Again, please explain why you think this is the case. News. How do you test it then? And where do quantum mechanics and statistical physics fit in with this scheme? wd400
Really, Denyse? Who's testing it? Rich
The question of whether randomness really builds laws WITHOUT information smuggled in is one that can be tested by information theory. At least we are now talking about something testable. News
35 keith s October 31, 2014 at 4:10 pm Unlike evolution, multidimensional optimization is not restricted to nearby points reachable by mutation. Think about it.
That just makes his statement even more wrong. Think about it. cantor
34 wd400 October 31, 2014 at 4:09 pm Cantor, I know a fair amount of math, and think Thorton’s comment is perfectly reasonable.
It's only reasonable if he's talking about finch beaks. But that wasn't the context of this thread. cantor
Denyse, A suggestion I made to kairosfocus on another thread about inappropriate or irrelevant comments.
———-
May I suggest a strategy for dealing with all the irrelevant comments to a thread, which quickly descend into either name-calling or unrelated ideas. Create a parallel thread where everyone can call each other names or discuss OT ideas. Allow their comments to exist, but in another place. That way there might be an intelligent discussion and not mindless or immaterial or at best peripheral comments. This would be a good place to start. That way even the anti-ID people might be forced to make responsive statements instead of just negative criticism in its various forms. It would allow people to follow discussions as opposed to having to wade through gibberish. jerry
So I see Denyse has banned Thorton and I've had to register for a 3rd time now. Shameful, UD. You're so uncompetitive and you can't let go of the old ways. Rich
Denyse, Could you explain why Thorton gets banned for referring to "moronic crackpots", while Joe remains despite saying things that are far more incendiary, for months on end? If there are different rules for ID supporters vs critics, perhaps you should make that clear. Double standards ought to be made explicit. keith s
Thorton:
Evolution doesn’t have to search the ginormous whole universe search space looking for innovations. All it does in each new generation is search the space immediately surrounding the existing working copy. If it finds a small improvement, it keeps it.
Cantor tries an expertise bluff:
That makes a lot of sense… if you don’t think about it. Do you have any background whatsoever in the field of mathematics known as operations research?
I call his bluff:
cantor, Evolution is not operations research. Think about it.
cantor tries another bluff:
What thornton described was multidimensional optimization, which is a subfield of operations research. Think about it.
Cantor, Unlike evolution, multidimensional optimization is not restricted to nearby points reachable by mutation. Think about it. And even if you weren't wrong about that, you'd still be wrong to criticize Thorton's statement. Prove me wrong by showing how operations research invalidates Thorton's statement:
Evolution doesn’t have to search the ginormous whole universe search space looking for innovations. All it does in each new generation is search the space immediately surrounding the existing working copy. If it finds a small improvement, it keeps it.
keith s
News,
Curious how few of Darwin’s supporters seem to understand that once one looks for laws instead of randomness, the meaning of evidence changes.
In fact, Wagner doesn't "replace" randomness with laws; he contends that laws, or at least common patterns, are built from the randomness. Keith's quote in 10 makes this pretty clear. Cantor, I know a fair amount of math, and think Thorton's comment is perfectly reasonable (perhaps the only sensible thing he said in this thread, in fact). Do you want to tell us why it's not? wd400
Thorton called Wagner a "moronic crackpot". I mean, even I try to give evolutionists a little more credit than that. But ok, if you insist - one of Europe's leading evolutionary scientists is a moron and a crackpot. I guess we just have to accept it. Thorton said it so it must be correct. :-) Silver Asiatic
News October 31, 2014 at 3:53 pm Thorton, based on 2 above, is no longer with us. Curious how few of Darwin’s supporters seem to understand that once one looks for laws instead of randomness, the meaning of evidence changes. (We know natural selection can do these wonders because evolution happened.) I don’t doubt Wagner would be cautious not to excite them. What specifically in 2 warranted a permanent ban? All I saw was a critique of someone's claim in a pop sci book. Is UD back to the old "agree with me or else" rules? Enkidu
11 keith s October 31, 2014 at 2:59 pm Thorton’s point is that evolution doesn’t search the entire space; it just searches the tiny portion of the space that is accessible via mutations from the current location.
I think we all understood what his point is. My point is that his point makes sense to him only because he has no mathematical background. It's an argument from credulity. cantor
Thortonista:
Evolution doesn’t have to search the ginormous whole universe search space looking for innovations. All it does in each new generation is search the space immediately surrounding the existing working copy. If it finds a small improvement, it keeps it.
What a moron. Evolution can never get started (no existing working copy can ever be built) because evolution eats its own tail. The probability that random variations will destroy whatever it builds is 100%. The only way to prevent this is to evolve a gene repair mechanism that knows in advance which new sequences to repair and which to try. Question: Why are Darwinists so stupid? Answer: They got religion. Mapou
Thorton, based on 2 above, is no longer with us. Curious how few of Darwin's supporters seem to understand that once one looks for laws instead of randomness, the meaning of evidence changes. (We know natural selection can do these wonders because evolution happened.) I don't doubt Wagner would be cautious not to excite them. News
18 Thorton October 31, 2014 at 3:16 pm No.
That's why your statement makes sense to you. cantor
By whom was he renowned?
He is renowned by the University of Zurich, which actually named an evolutionary lab The Andreas Wagner Lab. He's also a Fellow of the American Association for the Advancement of Sciences who "lectures worldwide". Ok, I will accept that he is a "moronic crackpot" - no problem there. But I think that's more than a bit controversial for the evolutionary side of things.
Why do you think a scientific PhD can’t have the occasional crackpot idea?
Because, supposedly, "there is no controversy" and "evolution is the most certain fact in all of science"?
When scientists like Mr. Wagner publish their results in the proper peer reviewed scientific journals I will. When they publish in the popular press with GEE!! WOW!! PARADIGM CHANGING!! all over the dust jacket I tend to roll my eyes.
I'm going to guess that he references peer reviewed work. But let's not overlook the related item that News linked to at the end of this OP, citing a paper in Nature:
The number of biologists calling for change in how evolution is conceptualized is growing rapidly. Strong support comes from allied disciplines, particularly developmental biology, but also genomics, epigenetics, ecology and social science [1, 2]. We contend that evolutionary biology needs revision if it is to benefit fully from these other disciplines. The data supporting our position gets stronger every day.
It goes on to talk about the paranoia and coverups in the evolutionary community for fear of appearing ID-friendly. That's supposed to be objective science. In any case, it illustrates that the problem is within the biological community itself. I don't think you can hide the fact that a "growing" number of scientists admit that evolutionary mechanisms do not account for the grand claims of evolution, and here it is openly said that "evolutionary biology needs revision". In other words, there's something wrong with it, obviously. Again, your challenge, Thorton, as I see it, is to try to make your view convincing to skeptics (at least while you're here), and it doesn't help to have other biologists cutting against your views -- whether they're moronic crackpots or not (how did the morons get hired as evolutionary scientists?). Silver Asiatic
Joe said: "Except it isn’t a search." Joe, please explain what you mean. Are you saying that kairosfocus and all other ID proponents who go on and on about searches are wrong? Astroman
16 keith s October 31, 2014 at 3:15 pm cantor, Evolution is not operations research. Think about it.
keiths, What thornton described was multidimensional optimization, which is a subfield of operations research. Think about it. cantor
Rhyme & Reason to Nature. Blind Watchmaker really a Poetic Logician. That certainly helps to explain the terrifyingly beautiful universe. Natural Teleology can account for the terrifying and the beautiful. Theology can too. Materialism can't account for either. ppolish
"Do you have any background whatsoever in the field of mathematics known as operations research?" AKA management science? decision trees? I though IDs treatment of evolution (as a random search of all parameter space) was as bad as possible. But no, please, go on..... REC
ppolish
Thorton, you called Dr Wagner a non science paper writing crackpot
No, I didn't. I said this idea was published in the popular press and not in a peer reviewed science journal. That makes it nothing more than pop science speculation. Please learn how to read for comprehension. Thorton
Except it isn't a search. Joe
Thorton, you called Dr Wagner a non science paper writing crackpot, which he most certainly is not. Sharp dude actually. Just wanted to give you a couple examples of genuine non science paper writing crackpots. That's all. ppolish
keith s
Thorton’s point is that evolution doesn’t search the entire space; it just searches the tiny portion of the space that is accessible via mutations from the current location.
Exactly. Thanks. Thorton
Do you have any background whatsoever in the fields of science known as biology or genetics?
You don't. Joe
cantor
Do you have any background whatsoever in the field of mathematics known as operations research?
No. Do you have any background whatsoever in the fields of science known as biology or genetics? Thorton
thorton:
Evolution doesn’t have to search the ginormous whole universe search space looking for innovations.
Unguided evolution isn't a search; that is why it is useless and can't explain anything beyond disease and deformities.
If it finds a small improvement, it keeps it.
Improvement is relative and includes loss of function. Joe
cantor, Evolution is not operations research. Think about it. keith s
ppolish
Thorton, when was the last time Dawkins or Tyson published a peer reviewed science paper? Have they ever? Couple crackpots right there. Angry crackpots.
To my knowledge neither of those two gentlemen has ever published a book in the popular press claiming to have discovered an entirely new law of nature. Did you have some sort of point? Thorton
2 Thorton October 31, 2014 at 1:53 pm Evolution doesn’t have to search the ginormous whole universe search space looking for innovations. All it does in each new generation is search the space immediately surrounding the existing working copy. If it finds a small improvement, it keeps it.
That makes a lot of sense... if you don't think about it. Do you have any background whatsoever in the field of mathematics known as operations research? cantor
Keith, you forgot to emphasize the last sentence: "There's rhyme and reason to how life evolved." ID thrives on rhyme and reason. Mountains of evidence for rhyme and reason. ppolish
Which, by the way, is why we laugh at all of KF's blather about needles in haystacks. He doesn't get it at all. keith s
Thorton:
Evolution doesn’t have to search the ginormous whole universe search space looking for innovations. All it does in each new generation is search the space immediately surrounding the existing working copy. If it finds a small improvement, it keeps it.
tjguy:
We have organism A living in say, the eastern half of the US. Up in Maine, a particularly helpful mutation happened. The organisms living in Maine did not have to search the whole population of organism A that lived in the eastern half of the US to benefit from that mutation. They found it in their own backyard. Then slowly over time, that mutation spread and became established in the whole population of organism A all over the eastern half of the US. Then a mutation happened in PA and the same thing happened. And this continued until the organism evolved into a new kind of organism? I’m probably not tracking with you 100%. Can you please explain for me what you mean more clearly?
tjguy, Thorton isn't talking about physical space; he's talking about an abstract search space. Every genotype can be visualized as occupying a point in a many-dimensional genetic space. Evolution over time then amounts to following a trajectory in this space. Thorton's point is that evolution doesn't search the entire space; it just searches the tiny portion of the space that is accessible via mutations from the current location. keith s
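To see concretely what "searching only the space immediately surrounding the existing working copy" means, here is a minimal Python sketch. Everything in it is an illustrative assumption rather than anything taken from Wagner's book or from real population genetics: the ten-letter toy genotype, the made-up match-counting fitness function, and the single-mutation neighborhood.

```python
import random

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 amino-acid letters (toy usage only)
TARGET = "MKTAYIAKQR"              # arbitrary stand-in for "a fitter sequence"

def fitness(genotype):
    """Toy fitness: number of positions matching the arbitrary target."""
    return sum(a == b for a, b in zip(genotype, TARGET))

def mutate(genotype):
    """A one-mutation neighbor: the same string with one position changed."""
    i = random.randrange(len(genotype))
    return genotype[:i] + random.choice(ALPHABET) + genotype[i + 1:]

def local_search(generations=10_000):
    """Never enumerates the 20**10 sequences in the space; each generation it
    examines one random mutational neighbor of the current genotype and keeps
    it if fitness has not dropped."""
    current = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    for _ in range(generations):
        candidate = mutate(current)
        if fitness(candidate) >= fitness(current):
            current = candidate
    return current, fitness(current)

if __name__ == "__main__":
    print(local_search())
```

The fixed target string is only a stand-in for "whatever happens to be fitter at the moment"; whether real fitness landscapes are connected enough for a strictly local walk like this to reach genuinely new structures is exactly what the thread is disputing.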
Denyse, Why are your journalistic standards so lax? In less than ten minutes of searching, I found this interview:
World Science Festival: We’ve all heard of ‘survival of the fittest.’ What does ‘arrival of the fittest’ mean?

Andreas Wagner: It’s actually taken from a famous botanist, Hugo de Vries, from the last sentence of a book he wrote in 1905: “Natural selection can explain the survival of the fittest, but it cannot explain the arrival of the fittest.” The arrival of the fittest here simply means how new traits originate. For example, there is this interesting fish called the winter flounder, which lives close to the Arctic Circle, in very deep, cold waters—so cold that our body fluids would freeze solid. Yet this fish survives there. It turns out that its ancestors discovered a new class of antifreeze proteins that work a bit similar to the antifreeze in your car. It’s very easy to understand how natural selection could help such an innovation spread through a population of fish, since it helps them survive; but it doesn’t tell us anything about how the innovation arose in the first place. That’s a puzzle that’s been with us since Darwin’s time.

WSF: So is it all down to random chance—genes get juggled, and stuff arises?

AW: There’s a whole lot more to it. Random chance still plays a role—we know that the DNA of organisms changes randomly. But there’s actually an organization process that helps these organisms discover new things. Think of a library that is so large that it contains all possible strings of letters. Each volume in this library contains a different string, and there would be many more volumes than there are atoms in the universe. We could call that a universal library. It would contain a lot of nonsense, but it would also contain a lot of interesting texts—your biography, my biography, the life story of every human who’s been alive, the political history of every country, all novels ever written. And it would also contain descriptions of every single technological innovation, from fire to the steam engine, to innovations we haven’t made yet.

Nature innovates with libraries much like that one. A protein is basically a string of letters, corresponding to one of 20 different kinds of amino acid building blocks in humans. A single protein could be 100 amino acids. So we can think of a library of all possible amino acid texts. When evolution changes organisms, what it does is explore this library through random changes in DNA, which are then translated by organisms into changes in the amino acid sequences of proteins. A population is like a crowd of readers that goes from one text to the next. Now, how would you organize a library if you wanted to easily find the text on a particular technology? You would have a catalog, and have all the texts about, say, transistors in one section of the library. That works for us because we can read catalogs, but in nature’s library it’s very different. Evolution doesn’t have a catalog; its readers explore the library through random steps.

There’s also something that’s very curious about this library. You think of antifreeze proteins as being a solution to a very specific problem that nature faces: “How do I keep this fish alive?” You may think there is only one single solution to this, one amino acid strain that provides this protection. But if that were the case, evolution would have a serious problem, because the library is so huge it could never be explored in the 4 billion years life has been around, or even 40 billion years. But it turns out there is not just one text that solves the problem, there are myriad texts that all have a different amino acid sequence specifying antifreeze protection. And these texts are not clustered in one corner of the library; they’re spread out all over…

WSF: So you’re more likely to run into it!

AW: Exactly. This particular organization also helps because when you, as a reader in a library of books, pick up a text that doesn’t have any meaning, you’d put it aside. But in evolution’s libraries that’s completely different: if an organism has an antifreeze protein, and a single mutation changes the sequence and disrupts the function of the protein, it becomes useless and the organism dies. Missteps in nature’s libraries are fatal. A large network of synonymous texts ensures that nature’s readers can stay on a path that specifies, say, antifreeze function. This network organization buys an additional benefit: As a population explores the library, near the network of synonymous texts you might find new innovations—superior antifreeze proteins, perhaps. So the peculiar organization of this library allows blind exploration, the preservation of things that work already, and the discovery of new things that allows the arrival of the fittest.

WSF: What’s the main thing you want readers to take away from the book?

AW: That there’s a fascinating world out there that Darwin didn’t have any idea about, and that really helps us explain how evolution can work. Evolution has been criticized from various quarters by people who say, “well, it can’t all just be random change.” The book shows principles that are in agreement with Darwinism, but go beyond it. There’s rhyme and reason to how life evolved. [Emphasis added]
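Wagner's "network of synonymous texts" can be put in the same kind of toy terms. In the sketch below the genotype-to-phenotype map is a deliberately crude assumption (only the first three letters matter, so huge numbers of different sequences share one "function"); nothing here comes from Wagner's own models or data.

```python
import random

ALPHABET = "ACGU"   # toy RNA-like alphabet, chosen only for brevity
LENGTH = 12

def phenotype(genotype):
    """Assumed toy map: only the first three letters determine the 'function',
    so vast numbers of distinct genotypes are synonymous."""
    return genotype[:3]

def one_mutant_neighbors(genotype):
    """All sequences reachable from `genotype` by changing a single letter."""
    for i, old in enumerate(genotype):
        for new in ALPHABET:
            if new != old:
                yield genotype[:i] + new + genotype[i + 1:]

def neutral_walk(start, steps=5_000):
    """Random walk that accepts only mutations preserving the phenotype,
    i.e. a walk along the network of synonymous texts."""
    current = start
    visited_genotypes = {current}
    neighboring_phenotypes = set()
    for _ in range(steps):
        candidate = random.choice(list(one_mutant_neighbors(current)))
        if phenotype(candidate) == phenotype(current):
            current = candidate                      # same function, new text
            visited_genotypes.add(current)
        else:
            neighboring_phenotypes.add(phenotype(candidate))  # novelty next door
    return len(visited_genotypes), len(neighboring_phenotypes)

if __name__ == "__main__":
    start = "AUG" + "".join(random.choice(ALPHABET) for _ in range(LENGTH - 3))
    print(neutral_walk(start))
```

Because most positions are irrelevant to the toy function, the walk drifts across many distinct genotypes without ever losing that function, and along the way it repeatedly encounters alternative phenotypes just one mutation away. That, in miniature, is the organization Wagner attributes to nature's libraries; whether real protein and metabolic spaces are actually wired this way is the empirical claim of his book, not something a sketch can settle.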
There is nothing anti-Darwinian about Wagner's thesis. DNA changes randomly, as he stresses. It's nothing but selection working on random mutations. Wagner just adds -- and this is not original to him, by any means -- that the fitness landscape isn't limited to one solution per problem. We knew that already, though IDers like kairosfocus try to downplay it with their "islands of function" rhetoric. Which makes this statement of yours ridiculous:
One thing for sure, if an establishment figure can safely write this kind of thing, Darwin’s theory is coming under more serious fire than ever.
You fell for the hype in the book announcement, and didn't bother to do the ten extra minutes of research that would have saved you from embarrassment. keith s
Thorton says:
Evolution doesn’t have to search the ginormous whole universe search space looking for innovations. All it does in each new generation is search the space immediately surrounding the existing working copy. If it finds a small improvement, it keeps it.
My question is this: How do you know that such "innovations" even exist? How do you know that there are enough innovations in the immediate surroundings to take the organism to the next level? How do you know that an evolutionary pathway from A to B even exists anywhere in space? I would think that searching the immediate area would severely handicap an organism because the chances of an innovation existing would be greatly lowered.
Let me see if I understand what you are trying to say. We have organism A living in say, the eastern half of the US. Up in Maine, a particularly helpful mutation happened. The organisms living in Maine did not have to search the whole population of organism A that lived in the eastern half of the US to benefit from that mutation. They found it in their own backyard. Then slowly over time, that mutation spread and became established in the whole population of organism A all over the eastern half of the US. Then a mutation happened in PA and the same thing happened. And this continued until the organism evolved into a new kind of organism? I'm probably not tracking with you 100%. Can you please explain for me what you mean more clearly? Thanks. tjguy
Thorton, when was the last time Dawkins or Tyson published a peer reviewed science paper? Have they ever? Couple crackpots right there. Angry crackpots. ppolish
Silver Asiatic
Interesting. “Renowned evolutionary biologist Andreas Wagner”
By whom was he renowned? Certainly not by anyone I know. Do you believe all the over-the-top hyperbole you read in every publisher's advertisements? Why do you think a scientific PhD can't have the occasional crackpot idea?
More seriously, you might consider engaging scientists like Mr. Wagner to try to convince him/them of his errors
When scientists like Mr. Wagner publish their results in the proper peer reviewed scientific journals I will. When they publish in the popular press with GEE!! WOW!! PARADIGM CHANGING!! all over the dust jacket I tend to roll my eyes. Thorton
Some sort of built-in evolvability might be the next sort of explanation. It's similar to self-organizational principles. It answers the question, "Why do organisms want to survive?" Silver Asiatic
Good Lord but there are some moronic crackpots out there.
Yeah and they even invited them all in when they scrapped the ban list... Sebestyen Sebestyen
Oh please, not the zombified remains of this stupid Creationist argument again.
Interesting. "Renowned evolutionary biologist Andreas Wagner" was somehow convinced by stupid Creationists and he turned against Darwin. Yes, I can see why Creationists are such a threat to science. Their arguments end up convincing renowned evolutionists. It could be some sort of creationist mind-control.
Good Lord but there are some moronic crackpots out there.
And some of them are "renowned evolutionary biologists" apparently. More seriously, you might consider engaging scientists like Mr. Wagner to try to convince him/them of his errors. While I can understand your frustration, I'd think you'd want to express it to him and his editors/publishers rather than here. I mean he's working in your own field of study so there should be some common ground to build on. Silver Asiatic
The debate of the future will be Naturally Directed or "Unnaturally" Directed. The Undirected argument is sinking fast. Natural Teleology vs Supernatural Theology. Bringing the debate up to the next level. ppolish
The question “how does nature innovate?” often elicits a succinct but unsatisfying response – random mutations. Andreas Wagner first illustrates why random mutations alone cannot be the cause of innovations – the search space for innovations, be it at the level of genes, protein, or metabolic reactions is too large that makes the probability of stumbling upon all the innovations needed to make a little fly (let alone humans) too low to have occurred within the time span the universe has been around
Oh please, not the zombified remains of this stupid Creationist argument again. Evolution doesn't have to search the ginormous whole universe search space looking for innovations. All it does in each new generation is search the space immediately surrounding the existing working copy. If it finds a small improvement, it keeps it. Good Lord but there are some moronic crackpots out there. Thorton
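For a sense of the scale both the reviewer and Thorton are gesturing at, the arithmetic can be done directly. The figures below are assumed, round numbers (a 100-residue protein over the 20 amino acids, and a deliberately generous guess at the number of trials available), so the output is an order-of-magnitude illustration only.

```python
from math import log10

residues = 100                         # assumed protein length
alphabet = 20                          # amino-acid alphabet size
sequence_space = alphabet ** residues  # about 10**130 possible sequences

# Generous, assumed upper bound on trials: ~10**30 organisms alive per year
# times ~4 * 10**9 years of life on Earth (both figures are rough guesses).
trials = (10 ** 30) * (4 * 10 ** 9)

print(f"sequences in the space: 10^{log10(sequence_space):.0f}")
print(f"trials available:       10^{log10(trials):.0f}")
print(f"fraction samplable:     10^{log10(trials) - log10(sequence_space):.0f}")
```

On those assumed numbers, blind sampling could cover only a vanishingly small fraction of the space, which is why nobody in the thread defends exhaustive search; the disagreement is over whether the local, mutation-by-mutation search Thorton describes can do the required work instead.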
Given reductionism, these "laws" have to be physical or chemical laws. So they're newly discovered properties or results of chemical interactions? As News says, however, it doesn't matter. What chance does Mr Wagner have of convincing the entire Darwinian-world that he found the 'hidden principles' that guide evolution? The only thing important here is that there's yet another establishment figure dropping a bomb (we like that metaphor lately) on Darwinism. "There are no weaknesses in evolutionary theory". "There is no controversy". Silver Asiatic
