Uncommon Descent Serving The Intelligent Design Community

Darwinists are Delegitimizing Science in the Name of Science


What Darwinists don’t recognize is that, in the name of promoting science, they are actually promoting skepticism about what can be trusted in the name of science.

Bears evolved into whales? No, that’s been rejected. “Scientists” suggest that whales might have evolved from a cat-like animal, or a hyena-like animal, or (fill in the blank).

“It is thought by some that…”

This is “science”?

Evolution is a fact, if evolution is defined as the observation that some living systems are not now as they once were. According to this definition I count myself as an evolutionist.

But Darwinists are unwilling to acknowledge their ignorance concerning how this all came about, and persist in presenting unsupported speculation in the name of science.

This is ultimately destructive of the scientific enterprise. When people read claims such as “science has discovered…” or “scientific consensus assures us that…”, they are likely to assume they are being conned, even when they are not, because they have been burned by so many past claims that turned out to be transparently false or were eventually invalidated by evidence.

Based upon what I’ve learned over my 60 years of existence — mathematics, chemistry, physics, music and language study, computer programming, AI research, and involvement in multiple engineering disciplines — I find this Darwinism stuff to be a desperate attempt to deny the obvious: design and purpose in the universe and human existence.

The irony is that Darwinists are doing much harm to that which they presume to promote — confidence in claims made in the name of science.

Comments
Correction: Given a 10,000 word vocabulary, a 1000 word essay would exist in a sequence space of (10^5)^(10^3), or 10^5000. Making the assumption, for the sake of the example, that a geometric 50% of sequences are meaningful, out of 10^5000 possibilities we would have 10^2500 meaningful configurations. With that number we've long since exhausted the storage capacity of the universe, by unfathomably many orders of magnitude. Yet somehow we can determine the meaningful sequences.

My math in the above paragraph was wrong, so I offer this revision: Given a 10,000 word vocabulary, a 1000 word essay would exist in a sequence space of (10^4)^(10^3), or 10^4000. Making the assumption, for the sake of the example, that a geometric 50% of sequences are meaningful, out of 10^4000 possibilities we would have 10^2000 meaningful configurations. With that number we've long since exhausted the storage capacity of the universe, by unfathomably many orders of magnitude. Yet somehow we can determine the meaningful sequences.

Hopefully that fixes it. Gone for the weekend, m.i. material.infantacy
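[Ed.: the corrected arithmetic above is easy to verify; working in log10 keeps the exponents readable. This is a minimal sketch, which reads the comment's "geometric 50%" as halving the exponent, i.e. taking the square root of the space.]

```python
import math

vocab = 10_000       # the comment's assumed vocabulary size
essay_words = 1_000  # words per essay

# Total sequence space is vocab ** essay_words; work in log10 so the
# exponents stay readable: 1000 * log10(10^4) = 4000.
log10_space = essay_words * math.log10(vocab)

# "Geometric 50%" taken as halving the exponent (square root of the space).
log10_meaningful = log10_space / 2

print(f"sequence space ~ 10^{log10_space:.0f}")      # 10^4000
print(f"meaningful set ~ 10^{log10_meaningful:.0f}")  # 10^2000
```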
Scott: Me too. gpuccio
Collin, We already know the answer to that. The same mainstream media that bashes ID and promotes darwinism produces films like Contact, in which a signal from outer space is quickly recognized as a message. Once a trace of specificity is determined, there is never any discussion of natural origins. Before someone says that's just the media, don't forget who wrote Contact. The contradiction couldn't be plainer, but abandon all hope of getting it to register. It just won't. It is ideological. It is religious. The explanation that is assumed in one case is outright rejected without consideration in the other, and vice versa. Objectivity is a pretense. This evidence of bias, closed-mindedness, and willingness to put the conclusion before the evidence sits plainly in front of everyone's noses. Be grateful if you're too stupid to be that smart. I am. ScottAndrews2
Can I offer an analogy? I don't do so to PROVE anything, only to illustrate and explain what I am trying to say. Let's say we find a machine on Mars. It is made of metal and rubber and glass and has a hard drive. We also discover a code on the hard drive that governs the actions of the machine. We also discover that the machine is self replicating. Years later some aliens show up and claim that they designed and built the machine. What scientific principles do we use to determine whether this is a true claim or not, assuming we can't get the aliens to offer proof? I assert that ID-ers are the ones who are trying to figure out what those scientific principles might be, and they should be applauded. Instead, they are usually dismissed due to the possible religious motivations they may have for their work. ("Creationism in a cheap tux"). Collin
I am firmly convinced that no theory of human evolution can be regarded as satisfactory unless the revelations of Piltdown are taken into account. ~ Arthur Keith bevets
A good Darwinist pays no attention to such trifles as common sense or evidence. It is their unbeatable tactic.
Stunning, just stunning. You think 150 years of diligent research, whose results are displayed on groaning shelves in thousands of institutions, constitutes a failure to pay due regard to evidence? Quite simply, the existence of a phenomenon is not evidence of its proposed cause, nor is its resemblance to something else evidence of a relationship beyond that resemblance. You need more. Chas D
A good Darwinist pays no attention to such trifles as common sense or evidence. It is their unbeatable tactic. Eugene S
Cases in point of such unsupported statements. kairosfocus
Liz, Sorry to see you go. If you like, send me an e-mail at GilDodgen@gmail.com with your mailing address and I'll send you a set of my classical piano CDs with program notes, and the story of my wonderful piano teacher who inspired me from the age of seven. I know you'll enjoy the music. GilDodgen
Elizabeth: Cheers :) I hope I can come and visit... gpuccio
OK, sorry I lost my cool there, gpuccio. And I understand your fatigue. I am in the grip of something similar, I think, and so I'm going to take a break from UD. I've learned a lot here, but things have got to the going-round-in-circles stage, and I think a break would benefit everyone :) And I've been neglecting my own site. If anyone from here wants to drop by they'd be genuinely very welcome. I'd love to see more theists drop by, and maybe stay for tea. Or fish. Cheers Lizzie Elizabeth Liddle
Elizabeth: Well, I was sure that my post would evoke some reaction. It was intended to. But not necessarily from you. Because, as a fact, you are certainly among the "intelligent and sincere". Others are not. But just the same, I take your blame. Because I do think that your cognition is deformed by serious cognitive biases. I am not saying that because I don't agree with your views (there are so many people in my life with whose views I don't agree, and certainly I don't believe that all of them have cognitive deformations). And I am not saying that because I don't understand your views. I think I understand them very well, although I certainly don't understand why you entertain some of them. Please note that I have not said "them", but "some of them". That is a point about me that you seem not to understand. I am not disturbed in any way by others being atheists, or darwinists, or strong AI fans (well, maybe that one just a little... we are human, after all :) ). I am not disturbed by others' convictions, faiths, moral behaviours, moral ideas, and so on. Not at all. I am, unfortunately, disturbed by cognitive inconsistency and bias. And I take your blame, and the full responsibility for it, because you have sometimes (I am saying "sometimes") disturbed me in that way. But others have done so much more, and without being, in any way, such fine persons as you are. To them, probably, my post was specially dedicated. Because, you see, I am really tired of certain things. But, you being the fine person you are, you were the first (maybe you will be the only one) to blame me. I am honored by that. With sincere friendship, and sorry for your sorrow, Giuseppe gpuccio
gpuccio
KF: My compliments for your excellent biochemistry! Speaking in private, now that nobody can hear us :) , I am really tired. I have tried to believe that darwinists, even with all the basic deformations of thought deriving from having to believe in a completely false theory, still could be able to reason correctly on many occasions. And some of our best interlocutors here, over the course of years, had reinforced that hope: I know we cannot convince them in the end, but at least we can discuss, sometimes constructively (by the way, sincere thanks to all those who have truly done that). But the recent accumulation of nonsense from the last wave of interlocutors, especially about fundamentals of scientific thought that should be very clear to everybody, is really discouraging. And the most frustrating thing is that such nonsense seems to come from otherwise intelligent (well, maybe not all of them), and sincere (well, probably not all of them) people. I am beginning to think that the cognitive deformations of darwinist reductionism and of strong AI theory on the human mind are more serious than I believed.
I have to say that I find the response that a view you either disagree with or don't understand must be the result of "cognitive deformation" quite extraordinarily arrogant, and I have seen nothing comparable to your last paragraph from any agnostic/atheist on this site. In fact, I think I'll take my "biased, prejudiced heart" elsewhere, right now, and leave you to yours. In sorrow, Lizzie Elizabeth Liddle
gpuccio: "And the most frustrating thing is that such nonsense seems to come from otherwise intelligent (well, maybe not all of them), and sincere (well, probably not all of them) people." ===== What you are describing is more of a heart condition as opposed to a mind condition. The heart, as used figuratively in the bible, is the seat of motivation. In 2 Corinthians, the third chapter, Paul gives a good historical illustration in which he describes the materialistic problem with the ancient fleshly minded Israelites. This in fact fits the subject perfectly here. Basically he recounts the historical scene of Moses coming off Mount Sinai with the Law. The Israelites see the glow of rays being emitted from Moses' face as a result of his being in God's presence, and they make excuses for not wanting to hear what God had said in the Law by insisting Moses put a veil over his face, because they said it bothered them and made them feel nervous. It was a lame excuse. All that these faithless people were interested in was getting to the promised land and striving after a what's-in-it-for-me (in the materialist sense) lifestyle. They wanted for themselves, in their very own country, the lifestyle they had observed among the Egyptians. They had zero appreciation for the spiritual things contained in the Law Covenant, which was to form the foundational basis of the Constitution for running their Nation. Paul goes on to show in verse 13 of chapter 3 what the real problem was. It wasn't their minds so much as their stubborn, faithless, arrogant heart condition: 2 Corinthians 3:14, Amplified Bible (AMP): "14) In fact, their minds were grown hard and calloused [they had become dull and had lost the power of understanding]; for until this present day, when the Old Testament (the old covenant) is being read, that same veil still lies [on their hearts], not being lifted [to reveal] that in Christ it is made void and done away."
**** This is comparable to the present Atheistic/Agnostic attitudes revealed here. No one, but no one, will convince them otherwise. There is no amount of evidence you can provide that will shake their Secularist Faith. Again, it isn't that they don't know or see the same things you do, or understand the same definitions you do. It's what's inside their biased, prejudiced hearts that prevents them from seeing any type of truth of a matter. And ultimately that is their free-willed, self-determined right, to view things as they see fit. There is no predestination here. The truth of the matter does not hinge on their arrogant, condescending approval or acceptance of it. But it still remains the truth despite their "What is truth?" game playing. Eocene
KF: My compliments for your excellent biochemistry! Speaking in private, now that nobody can hear us :), I am really tired. I have tried to believe that darwinists, even with all the basic deformations of thought deriving from having to believe in a completely false theory, still could be able to reason correctly on many occasions. And some of our best interlocutors here, over the course of years, had reinforced that hope: I know we cannot convince them in the end, but at least we can discuss, sometimes constructively (by the way, sincere thanks to all those who have truly done that). But the recent accumulation of nonsense from the last wave of interlocutors, especially about fundamentals of scientific thought that should be very clear to everybody, is really discouraging. And the most frustrating thing is that such nonsense seems to come from otherwise intelligent (well, maybe not all of them), and sincere (well, probably not all of them) people. I am beginning to think that the cognitive deformations of darwinist reductionism and of strong AI theory on the human mind are more serious than I believed. gpuccio
Acipenser: Your point here is totally irrelevant. Hemoglobin has nothing to do with this discussion. You are only stating the obvious, that proteins interact and accomplish their functions through various biochemical mechanisms, the binding of ligands and conformational changes being the most common. OK, we all know that, and so? You seem to state that to erase the concept of the symbolic correspondence of the codons to amino acids in the genetic code. But that is complete folly, mixed up reasoning at its worst. The point with the genetic code, which all biologists understand, is that it is a code. The codons in themselves have no special relationship with the amino acids they represent. The tRNA in itself is not capable of coupling its anticodon to the right amino acid. So, for the full mechanism of translation to go on, two different things must have happened: a) The sequence in the DNA protein coding genes must correspond to the functional sequence of amino acids for a specific protein, according to a well-defined abstract symbolic code (the genetic code). b) 20 very complex proteins, the aminoacyl-tRNA synthetases, must have a specific configuration with very efficient active sites and foldings, all different, and ascribable to two different classes of proteins, so that each enzyme independently is able to recognize a specific amino acid and the specific anticodon on the tRNA, exactly according to the same abstract symbolic code that is used in the storage of information in the DNA gene. IOWs, the coding must be respected both in the writing of the information (which is the mystery we all here are trying to interpret), and in the reading of the information, by the complex translation machinery, and especially by the 20 enzymes, the true depositaries of the decoding key. The laws of biochemistry have nothing to do with this level of organization, although the implementation of the scheme is obviously realized by the laws of biochemistry.
You go on saying: but it is biochemical configurations that the enzymes recognize! That is so obvious and trivial that I really don't understand how you dare to offer it as a deep intuition about reality. Your reasoning is superficial and wrong. The enzyme recognizes the anticodon and the amino acid by their configuration, but only because it is built to do that. A lot of information is needed for that machinery to work that way (hundreds of amino acids for each of 20 of the most ancient and efficient proteins in the world). When we read, or when an OCR program analyzes a scanned text, the letter "A" is recognized by its form. Then the OCR program transforms it into a digital value. In the same way, the enzyme recognizes the anticodon in the tRNA by its form, and transforms it into the correct amino acid, which is a digital value in the string of amino acids that is the final protein. And you also should know that, in protein science, the connection between primary sequence and final configuration is extremely complex, as Petrushka loves to remind us for not so clear purposes. It can be understood and computed, but huge computational resources are needed just to do that for one single protein. So, the levels of abstraction in the whole mechanism are really stunning: a) DNA is the depository of a treasure trove of information about protein function that even for us would be impossible to accumulate. b) That information, for reasons that nobody can explain, is coded through an abstract symbolic code of 64 values written in base-four characters. c) 20 very complex proteins are structured in a very specific way just to read the code. d) A very complex machinery, including the ribosome and a lot of proteins, builds the protein primary structure according to the information received from mRNA. e) That primary structure then folds into the final functional structure, sometimes by itself, more often with the help of other very complex proteins. And this is only the essence.
I have overlooked all the regulation nodes, post-transcriptional, post-translational, and so on. So, if you want, go on thinking that all that is only a matter of biochemical configurations and the like. I am no longer surprised by anything in the darwinist field. gpuccio
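[Ed.: the point that the codon-to-amino-acid mapping works as a lookup, not a chemical necessity, can be illustrated with a small sketch. The codon assignments shown are a slice of the standard genetic code; the translate helper is an illustration of table lookup, not of the molecular machinery itself.]

```python
# A small slice of the standard genetic code. The triplet is a symbol for
# an amino acid; the pairing is enforced by the aminoacyl-tRNA synthetases,
# not by any chemical affinity between codon and amino acid.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "UUC": "Phe",
    "GGU": "Gly", "GGC": "Gly", "GAA": "Glu",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Read an mRNA string three bases at a time and look each codon up."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "STOP":
            break
        peptide.append(aa)
    return peptide

print(translate("AUGUUUGGUGAAUAA"))  # ['Met', 'Phe', 'Gly', 'Glu']
```

Swapping entries in the table would "reprogram" the output without touching the reading loop, which is the sense in which the mapping is conventional rather than chemically forced.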
Acipenser: I certainly am. And still I don't understand your point about the genetic code and DNA protein genes. gpuccio
Aci, the interactions between AAs in sequences are consequential on specifying and chaining. They are a result of the right chain, allowing folding and function. The chain proper is a reaction between COOH and NH2 ends. Any AA can follow any other, but the functional sequences that fold right and function are determined independent of chaining. In life, they are informationally specified. That should tell us something. The codon-anticodon match is simply used to say which loaded tRNA, with the loading-enzyme-set AA on the standard CCA coupler end opposite the anticodon, will be allowed to elongate the chain next, till STOP. Again, the chaining is informationally controlled. GEM of TKI kairosfocus
Onlookers: The AA is joined to the CCA end of the tRNA by a loading enzyme. This is the OPPOSITE end to the anticodon that interacts with the mRNA in the Ribosome, i.e. there is no direct chemical relationship between codon, anticodon and AA. In addition, the AA coupler is standard: CCA -- COOH, so in principle we could synthesise an enzyme to load a different AA. Indeed, I think some experiments have been done that reprogram stop codons to carry additional AAs. In turn, AAs chain NH2-COOH ends, and the string of AAs allows any AA to follow any other. The process is a translation, not a matter of chemistry or any other force of necessity. This has of course been pointed out before, repeatedly, and just brushed aside. The sequences of nucleotides in D/RNA are not chemically constrained, and the AA sequence in proteins is not chemically constrained. The sequences found in life are functionally specified, and are informationally programmed based on stored info, using a translation system and regulatory systems that call them up as needed. Information and information processing of highly sophisticated forms lie at the heart of cell based life. All of this points strongly to design. And it seems the real root of many objections on points that could otherwise have been simply resolved is that that outcome cuts sharply across the institutionally dominant view, creating all sorts of conflicted thoughts. GEM of TKI kairosfocus
"Explaining the operation of a system is a crucial first step in explaining the origin of that system."
Granted. A crucial first step in explaining the origin of a system may very well be determining the intricacies of its operation, especially considering the presupposition that necessity leads step-wise from the simple to the complex. However that still leaves an enigma of progression from simple necessity to integrated functional complexity, yet to be unraveled. It seems premature to assume that simple breeds sophisticated, absent the intermediaries. Thanks for the conversation. m.i. material.infantacy
ACI: As I stated, ignoring the obvious flaws in the computer-biology analogy is common with ID proponents. BIPED: I am not retreating to a computer analogy. I am pointing to the observable physical evidence entailed by the transfer of information. ACI: Upr, certainly you are, as your opening comment on this thread demonstrates.
So, if an ID proponent doesn't speak to the “computer analogy” then they are avoiding its presumed flaws, and if they do say something about it, then they are retreating into a flawed analogy. That's a nice way to isolate your assumptions from any critique. But your core assumption is the exact topic of my opening comment. You are suggesting that I must address your comment before I can probe your assumptions. That is literally insane. I can immediately ask you a question (which perhaps can't be answered), yet within that question I can make an assumption that - by your standard here - cannot then be questioned. This is something no rational person would tolerate even for an instant. So I am not sure why you think it flies here and now. In any case, I won't quibble over this point, but will instead take you at your word.
I would be glad to address the issue you raised once you’ve addressed my questions
Okay, this is the exact question you posed to me in 5.1.4.3.13: “Could you give some examples of ligand binding in computers”. My answer to your question is “No, I can't”. Of course, I wouldn't expect to either, and I know enough about information transfer to know that it doesn't matter anyway. There are no transistors in my fountain pen; there are no magnetic lines of iron oxide when I speak. The comparison being made is not concerned with the system used to transfer information; it's being made to the dynamics of the transfer itself. Hence, your assumption is flawed. Now that I've answered your question, you are bound by your word to answer mine. I will repeat it for you here:
Will you take a minute to consider the evidence, and please answer this question: If on one hand we have a thing that “is a genuine” representation, and on the other hand we have something that “just acts like” a representation, can you look at the physical evidence and tell me the distinction?
Upright BiPed
Do these folks know how hard it is to get a protein folding sequence, before we get to actual biofunction? Isn't it about 1 in 10^70 or so of AA sequence space, per Axe's studies? kairosfocus
GB: Pardon, but you do come across as one who has not taken time to examine the observed facts, and what has long since been on the table, and is instead tossing around talking points as one looking for a fight. If you are genuinely serious, I suggest you work your way through the post here as an introduction (and also this on the underlying epistemology of empirically anchored inference to best explanation), then look at this on the relevant metric [where also this survey may be helpful -- at least, watch the vid at Fig I(i)b.] -- if you find the just above linked exchange with Dr Liddle by GP not complete enough -- and come back to us. When you do so, summarise what you have learned and then give your responses, with grounds. Please pay particular attention to the discussion of the per aspect explanatory filter and the derivation and use of the closely linked log reduced Chi_500 metric in your response. Silly playground rhetoric stuff like "round and round you go . . . " etc frankly comes across as really disrespectful to a man who has done his homework, and has spent a considerable amount of time in serious and sober-minded, thorough discussion on a matter. To be frank, you come across as someone who has not got the underlying epistemology of warrant for empirical, inductive knowledge claims straight, which is the grounds for science. So, kindly also satisfy us that you understand how inductive warrant works. As a footnote, what is going on in the background is inductive inference on tested, empirically reliable sign. This, I discussed in a background, here, as follows:
Signs: I observe one or more signs [in a pattern], and infer the signified object, on a warrant: I: [si] –> O, on W

a –> Here, as I will use “sign” [as opposed to "symbol"], the connexion is a more or less causal or natural one; e.g. a pattern of deer tracks on the ground is an index, pointing to a deer. (NB, 02:28: Sign can be used more broadly in technical semiotics to embrace “symbol” and other complexities, but this is not needed for our purposes. I am using “sign” much as it is used in medicine, at least since Hippocrates of Cos in C5 BC, i.e. to point to a disease on an objective, warranted indicator.)

b –> If the sign is not a sufficient condition of the signified, the inference is not certain and is defeatable; though it may be inductively strong. (E.g. someone may imitate deer tracks.)

c –> The warrant for an inference may in key cases require considerable background knowledge or cues from the context.

d –> The act of inference may also be implicit or even intuitive, and I may not be able to articulate it but may still be quite well-warranted to trust the inference. Especially, if it traces to senses I have good reason to accept are working well, and are acting in situations that I have no reason to believe will materially distort the inference.

e –> The process of observation may be passive, where I simply respond to effects of the sign-emitting object; or it may involve active emission of signals or interaction with the object. For instance, we may contrast passive and active sonar sensing here, noting that both modes are used by sea-animals as well as technical systems. (NB: “Object” is here used in a very broad sense [u/d 02:17: it includes objects and credibly objective states of affairs].)

f –> A sign can also be iconic, i.e. sufficiently resembling [u/d, 02:17: or representing] the object to be recognisable as a representation, as a general class [a rock shaped like a face] or in specific [a sculptural portrait].
[u/d 02:28: In the case of a mace in its rest in Parliament, unless an elaborate form of a former weapon sits there, Parliament is not legitimately in session.]
Digitally coded, functionally specific, complex info [dFSCI] is easy to see. All posts in this thread with messages in English or another language of at least 72 ASCII characters are cases in point. None of us probably has met any of the others of us in the flesh, but we routinely know that something as complex and functionally specific as these posts is maximally unlikely to have happened by a burst of lucky noise on the Internet. That is, routinely, we rely on dFSCI as a reliable sign of intelligent action. And that confidence is amply confirmed. That “none of us” actually includes you, at least when you are not playing at being selectively hyperskeptical. From the sign of the posts in this thread you accept that GP is a real person, not a strange burst of noise on the net. We can extend this to the net as a whole, and see that the net is full of cases in point where dFSCI is a reliable sign pointing to intelligent design. Go over to a university library and look at the books in it; the picture is the same. To test this idea, there have been infinite monkey theorem experiments. What they tell us is that chance based random walks can generate up to maybe 20 - 25 meaningful characters of text, i.e. a configuration space of about 10^50 is searchable. Wikipedia has an interesting note on this:
The theorem concerns a thought experiment which cannot be fully carried out in practice, since it is predicted to require prohibitive amounts of time and resources. Nonetheless, it has inspired efforts in finite random text generation. One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona". Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".[21] A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters: RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d...
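[Ed.: the scale of the problem can be felt with a toy version of the monkey experiment. This is a sketch; the target phrase, seed, and trial count are arbitrary choices, and the 27-letter alphabet is far more forgiving than the real keyboard used in the experiments quoted above.]

```python
import random
import string

random.seed(0)  # fixed seed so the illustration is reproducible

TARGET = "METHINKS IT IS LIKE A WEASEL"  # arbitrary 28-character target
ALPHABET = string.ascii_uppercase + " "  # 27 equiprobable "keys"

def best_prefix(trials):
    """Type `trials` random 28-character strings; return the longest
    prefix of TARGET matched by any single attempt."""
    best = 0
    for _ in range(trials):
        attempt = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
        match = 0
        while match < len(TARGET) and attempt[match] == TARGET[match]:
            match += 1
        best = max(best, match)
    return best

# Each extra matched character multiplies the odds against by 27, so even
# 100,000 attempts rarely match more than three or four characters.
print(best_prefix(100_000))
```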
But that is 1 in 10^100 of the config space specified by 500 bits. The 10^57 or so atoms of our solar system -- our practical universe -- running at top speed for nature, the Planck time, would in 10^17 s -- more or less the age of the solar system on the usual timeline -- go through 10^102 Planck time quantum states, 1 in 10^48 of the possibilities for 500 bits. Note, it takes about 10^30 Planck-time quantum states to carry out the fastest chemical reactions. Converting into familiar terms, that's about a 1 straw sample from a haystack 3 1/2 light days across, at 186,000 miles/s for light. Even if a whole solar system were hiding in the haystack, a one straw blind sample would be maximally likely to pick up only what is typical: a straw. Sometimes, there is too much haystack to expect to be able to find a needle on a blind search. And that is the point of dFSCI. If you were to depend on infinite monkey type chance processes and a trial and error filter, using the resources of our solar system, you would be maximally unlikely ever to hit on a case of dFSCI by blind luck. Yet we intelligent designers complete posts here in a few minutes. That's because we are using skill, knowledge and intelligence, not chance and necessity. That is why dFSCI is a pretty good sign of intelligent cause. And no, that is not tail chasing question begging, no matter what some silly objector talking point at one of the usual sites that try to denigrate design theoretic reasoning may have told you. (In case you do not recognise it, this reasoning is actually very close to the statistical reasoning that underlies the second law of thermodynamics, as the Wiki article will hint at, if you read the whole article.) So, now, let us hear your response at a very different level from the above. GEM of TKI kairosfocus
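[Ed.: the exponents in the comment above can be checked quickly. This sketch takes the comment's own round figures (a 10^50 searchable space, a 10^102 state count) as given and only verifies the ratios; the printed exponents carry the half-order the comment rounds off.]

```python
import math

# 500 bits of configuration space: 2^500 ~ 3.3 x 10^150.
log10_space = 500 * math.log10(2)

searched_by_monkeys = 50   # ~10^50 space reachable by random-text trials
solar_system_states = 102  # the comment's 10^102-state figure, taken as given

print(f"2^500 ~ 10^{log10_space:.1f}")
print(f"monkey-search fraction: 1 in 10^{log10_space - searched_by_monkeys:.1f}")
print(f"10^102-state fraction:  1 in 10^{log10_space - solar_system_states:.1f}")
```

Rounding the exponents down reproduces the comment's "1 in 10^100" and "1 in 10^48" figures.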
I'm not playing twister, I'm attempting to clarify my terms. You appear to be making a claim that the properties of the nucleotides determine their sequence along the backbone. If not, then you're in basic agreement with my point. It's either one or the other. Either the protein sequence necessitates which nucleotides are required and in what order, or the nucleotides determine the sequence in which they are composed. Which is it? material.infantacy
No, your explanation is not clear at all and appears to me to be an exercise in the game of Twister. The sequence is determined by which bases are present and in what order. In that regard a specific protein does require a specific sequence, and not any sequence will result in the same protein, although there is great plasticity in this regard as well. For example, vertebrate hemoglobins demonstrate a wide variety of affinity for oxygen, ranging from highly sigmoidal to hyperbolic binding curves. But they are still all classified as hemoglobin even if the sequences of the bases coding for that protein differ. If you wish to discuss 'necessitates' then you will have to discuss the origin, of which both of us, and all of humanity, are currently ignorant. Explaining the operation of a system is a crucial first step in explaining the origin of that system. In the case of biochemistry there is a great deal of information available on how various molecules interact, with the oft noted plasticity of the biological system(s). Discrete binding is seldom observed in biology; the norm is that many other chemicals may also bind, which can in turn generate/suppress a biological response. Acipenser
Acipenser, first my apologies for misspelling your name. I just realized it, and I'll endeavor to avoid that in the future. As a nitpick, in some of your posts you quote me and in some you address me with the same designation, "MI:" so it appears at least in one case that I'm saying something that I'm not. Now to the point. The sequence determines which molecules are necessitated. There is no necessity that is imposed by ATCG that determines the sequence, that necessitates a specific sequence, otherwise every DNA molecule would contain the same sequence. Is the distinction I've made apparent now? By "determines" I mean "necessitates." That's why I used the language analogy, which is apt. The alphabet does not determine the paragraph, that is, the alphabet does not necessitate the sequence of characters in the paragraph. Rather, the sequence imposed on the paragraph determines, or necessitates, the letters that are used. Is the distinction clearer now? I hope so. Do you have any other disagreements with my original point, that explaining the operation of a system does not explain its origin? material.infantacy
MI: Do the molecules determine the sequence? If not, then the above statement is accurate. Of course the molecules determine the sequence. If you substitute one molecule for another you have a different sequence. The sequence of the molecule determines the specificity through the creation of unique stereochemical environments. I think if I were you I would try to drop the agency language, e.g., bases having a say. The bases don't say where they should be any more than an oxygen molecule says where it should be, whether in hemoglobin, the atmosphere, or rust. However, the sequence of bases determines which amino acids are incorporated into a growing protein chain. How the sequences arose is certainly the subject of much research and speculation, but there is nothing mystical in the stereochemical interactions of ligands with biological polymers. Yes, I disagree with your original point as it applies to biology. Acipenser
gpuccio, as an MD I would expect that you would be familiar with ligand binding concepts, e.g., allosteric effectors, and how they may influence binding. Acipenser
Expression of the genetic code is dependent on ligand binding at all steps of the process. Allosteric binding of ligands influences protein/RNA 3-D configuration which, in turn, influences specificity of an active site for a different ligand, e.g., tRNA and amino acid specificity. The example of hemoglobin demonstrates a simple allosteric change in a protein due to ligand binding. In the case of hemoglobin (well, most vertebrate but not all vertebrate Hbs) binding one oxygen promotes the binding of a second oxygen molecule, then a third, then a fourth, due to changes in the structure of the hemoglobin molecule as a result of ligand binding. The binding of protons and carbon dioxide also changes the shape of the hemoglobin molecule, resulting in decreased affinity for oxygen. This is a well known system and I'm surprised that someone with an MD would not recognize the importance of ligand binding and the subsequent changes in 3-D configuration in biological polymers. The question on the table was "why does one sequence of codons result in serine versus threonine being bound and incorporated into a growing protein chain." The answer is in the stereochemical nature of each codon and its interaction with the tRNA molecule. It's been stated many times on this site that the binding of the amino acid and the codon are not connected, but this ignores the very real phenomenon of allosteric 3-D changes which result in binding of something at one end of the chain and the affinity/binding of another ligand at a distant part of the chain. I'm surprised that you don't recognize this. Acipenser
"The problem for a designer is one of knowledge. How does a designer accumulate the knowledge to know what sequences are functional, when the possible combinations of a single gene exceed the number of particles in the universe? Where is the knowledge stored? How is it accessed?"
To borrow an example from John Lennox, we take a ten word sentence: “This ten word sentence is an example of specified complexity.” Now there are 10! ways to arrange the words in that sentence, or 3,628,800 permutations. Of those potential sequences, I think it’s pretty clear that a diminutive number are grammatically meaningful. I’ll expand on that example. Given a 10,000 word vocabulary, a 1000 word essay would exist in a sequence space of (10^5)^(10^3) or 10^5000. Making the assumption, for the sake of the example, that there is a geometric 50% of meaningful sequences, out of 10^5000 possibilities we would have 10^2500 meaningful configurations. Now with that number, we’ve long since exhausted the storage capacity of the universe, by so many orders of magnitude as to be completely unfathomable, many times over. Yet somehow we can determine the meaningful sequences. However this is mathematically intractable. Where are all those sequences stored? The answer, of course, is that they are not stored. They’re determined. We determine them using logic, reason, and the rules of grammar. Such could very well be the case with functional protein sequences -- that there’s a logic, a grammar, to be understood in the laws of chemistry, that determine functional sequences such that no storage is required beyond the expression of them.
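For what it's worth, the arithmetic can be checked directly. Note that the follow-up correction at the top of the thread fixes the vocabulary base to 10^4 (a 10,000-word vocabulary), which is what this sketch uses:

```python
from math import factorial

# 10! orderings of the ten-word sentence
print(factorial(10))              # 3628800

# A 1000-word essay over a 10,000-word vocabulary:
# the vocabulary is 10^4, so the space is (10^4)^1000 = 10^4000
vocab, length = 10 ** 4, 10 ** 3
space = vocab ** length
print(len(str(space)) - 1)        # 4000, i.e. the space is 10^4000
```

Python's arbitrary-precision integers make the exact power computable; counting decimal digits avoids any floating-point overflow.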
The problem from a design standpoint is that nature has far greater resources for generating and testing novel sequences. Now either functional space is such that it can be traversed incrementally, or it isn’t. If it isn’t, then it is inaccessible to designers as well as to evolution.
This is not obviously the case. Now if functional sequence space is not such that it can be traversed incrementally, then it is inaccessible to evolution -- and tremendous genius is implicated. material.infantacy
Hi Acipencer, if I haven't said anything different than what you stated, then why the reaction to my point, "The molecules ATCG apparently have nothing to say about sequence specificity."? Do the molecules determine the sequence? If not, then the above statement is accurate. No, I do not disagree that the system dictates how the sequence is translated and transcribed. What I'm taking issue with is whether the system can be said to determine the sequence of nucleotides, or more specifically, that the bases themselves have anything to say about the sequence they specify. My language analogy is apt. Do you disagree with my original point, where I insisted that a system imposes the rules by which the symbols are interpreted:
...the mapping of any byte-sized transistor state to a symbol is a function of the microelectronics architecture and the software that programs it. It’s purely physical. The operation of the system requires no explanation apart from the laws of physics — it can all be explained by necessity. However invoking necessity as a cause for its origin is self-evidently problematic.
material.infantacy
Acipenser, I'm not offering a computer analogy. I'm talking about actual computers and other electronic devices that rely on messages. Every interaction between memory, the processor, the storage, and the display can be reduced to known natural processes, most notably the behavior of electricity. By attempting to frame my question as an analogy and then implying that I don't know what I'm talking about, you're dodging the question. You seem to have this strange opinion that a thing cannot be a symbol if any natural laws are involved in its processing or interaction. I have a hunch that you'll apply the rule in one case when it suits you, but not when it doesn't. Or you'll further complicate your "rule" to distinguish between the two, in which case I'll just find another example for you; or it's just question-begging, in which case I'll point that out. I'm not asking you a difficult question. I'm asking you to clarify this line you've drawn between what is a symbol and what isn't, to see if you're just making it up ad hoc. ScottAndrews2
Acipenser: I don't understand your posts. What has the binding of ligands to do with the genetic code? Please, explain. What has the biochemical behaviour of hemoglobin to do with the symbolic information present in protein coding gene? I think you are really confused. But please, explain better your thought. And in detail, if possible. gpuccio
MI: The letters, or strings of letters, do not reflect the stereochemical environments contained in any specific sequence. The letters are human constructs and are limited in what they convey about why things bind to other things. How is what I said any different than what you stated? The specific sequences of bases create unique stereochemical environments which determine that sequence's interaction with other molecules. It is the unique stereochemical environment that dictates what that sequence will bind to as well as what will bind to it, and with what affinity. Do you disagree? Acipenser
Upr, certainly you are, as your opening comment on this thread demonstrates. I would be glad to address the issue you raised once you've addressed my questions, which were clearly on the table prior to your arrival on this portion of this thread. I stated as much previously. Any delay in my response is a direct reflection of your unwillingness to address the questions I posed. It isn't difficult to understand why you're unwilling to address the issues I posed. Acipenser
Acipencer, how does the alphabet determine the paragraph you just wrote? It doesn't. That's the obvious point. The letters don't determine the sequence -- the sequence determines which letters are used. I thought it was obvious. It's not the bases which determine the product, it's the sequence. That you can map the bases to a given protein is not to say that the bases determine the protein. The sequence of the bases is the language that determines the product. That you can map letters of an alphabet to a meaningful sentence does not mean you've explained the sentence via the alphabet. Sequence is sovereign. Long live Sequence. material.infantacy
But Aci, I am not retreating to a computer analogy. I am pointing to the observable physical evidence entailed by the transfer of information - that evidence which you will not address. cheers Upright BiPed
Sure, Upright, as it is your right to continue not to answer my questions. As I stated, ignoring the obvious flaws in the computer-biology analogy is common with ID proponents. I'd say you and the other ID proponents on this thread are avoiding dealing with some very basic questions, and one can easily guess why: they are questions for which you have no answers, and therefore you don't like them and retreat to the computer analogy regardless of how flawed that analogy is when compared to biological systems. Acipenser
MI: The molecules ATCG apparently have nothing to say about sequence specificity. What? Any combination of those bases produces a unique physico-chemical environment. This is the specificity, and the unique charge environments contained within any combination will dictate what that combination will bind to as well as what will bind to it. This is hardly news and is the basis for receptor theory in biology. How these combinations interact with proteins and create varying allosteric changes in the protein molecule active sites, facilitating binding of specific ligands, is well recognized, e.g., drug-receptor interactions. To ignore these stereochemical interactions is to ignore a very large segment of known biology. Acipenser
Well that's fine. It's your right not to answer. As I said, these are coherent observations of the physical dynamics involved in information transfer, but they are not observations that materialists want to deal with. And they like the questions that follow even less. It's much better to ask if computers bind any ligands. Upright BiPed
The mapping of 00100101 to 37 is electronic; it's also symbolic. Mapping 01100001 to "a" is electronic; it's also symbolic. As a matter of fact, the mapping of any byte-sized transistor state to a symbol is a function of the microelectronics architecture and the software that programs it. It's purely physical. The operation of the system requires no explanation apart from the laws of physics -- it can all be explained by necessity. However invoking necessity as a cause for its origin is self-evidently problematic. The molecules ATCG apparently have nothing to say about sequence specificity. The claim that the sequence of nucleotides in the DNA molecule necessitates proteins is also to say that the proteins it codes for necessitate DNA. It's the circularity that is glaringly troublesome. Nobody can explain how the system was bootstrapped. That an external cause may have been necessary is apparent. To claim, ahead of empirical verification, that it all happened by natural law -- with no such law described -- is begging the question. To describe the operation of a thing does not explain its origin; the two are in entirely different categories. material.infantacy
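The two byte-to-symbol mappings mentioned above are easy to verify (a trivial illustration, nothing more):

```python
# 00100101 read as a binary integer: 32 + 4 + 1
print(int('00100101', 2))     # 37

# 01100001 read as an ASCII/Unicode code point: 97, the letter 'a'
print(chr(0b01100001))        # a
```

The same bit pattern yields a number under one convention and a letter under another, which is the sense in which the mapping is a matter of the encoding scheme rather than of the bits themselves.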
Upright, Once you've addressed my questions I'd be happy to address yours. I'd like to think that you aren't just ignoring them. Quid pro quo and all that polite stuff that conversations are made of. I'm not aware that computers bind any ligands in order to obtain an output response from the operating system or software. Could you give some examples of ligand binding in computers, or is it as I stated, that the computer analogy is flawed? Acipenser
Elizabeth Liddle: Could you give an example of an unsupported speculation that a Darwinist has presented in the name of science? Well, there is your own claim of frost being self-replicating:
For example, if you look at frost patterns on a window pane, you are looking at a very simple example of self-replication – a pattern begins, possibly because of a speck of dust on the window, and that pattern spawns a copy, which spawns a copy, etc until you have a repeating pattern stretching across the glass. That means that if you have a very simple “probiont”, consisting perhaps of no more than lipid bubbles going through cycles of enlargement, driven by, for example, nothing more complex than convection currents and osmotic forces, you’ve got something that is potentially, I would argue, a “self-designing system”.
Charles
To say that the information transfer in a computer system is a flawed analogy to information transfer in DNA (or any other system of information transfer) - is simply an assertion, made without addressing the common dynamics involved. Aci, since you are well trained on this subject matter, may I ask you the question that all before you have steadfastly ignored? Will you take a minute to consider the evidence, and please answer this question: If on one hand we have a thing that “is a genuine” representation, and on the other hand we have something that “just acts like” a representation, can you look at the physical evidence and tell me the distinction? By addressing the actual physical dynamics involved in information transfer, you will be able to refute the observation. I look forward to reading your response later today. (Scott, my apologies for stepping in on your conversation) Upright BiPed
Scott, instead of dragging a flawed computer analogy into the picture (we can agree that a computer binds no ligands, I hope) why don't we stick with the real biological issue of ligand binding as it pertains to allosteric modifications of proteins/enzymes, as well as why some ligands bind some receptors and not others. Flawed computer analogies won't shed any light on why molecules interact with biological receptors with differing specificities and affinities, and that is the very basis of your question. If you don't understand the processes, that's fine, just admit it. I spent years of my life studying such reactions so I really don't expect a layperson to have an intuitive grasp of the subject, which I think comes to light with many of the questions that are raised on this forum. Acipenser
Acipenser, I challenged you to apply your same logic to computers. There's nothing going on there besides predictable reactions, electricity and silicon. Are you prepared to apply your reasoning consistently and state that there are no symbolic codes at use in computers? Can you do it without begging the questions (i.e. computers are designed but DNA isn't?) I'm asserting that you apply your logic capriciously, not consistently, and using a simple example to illustrate it. Do you care to refute my assertion? ScottAndrews2
Scott: Why is there a chemical reason why TCT maps to serine? With many hemoglobin molecules, why does binding one oxygen molecule facilitate/enhance the binding of a second, then a third, and finally a fourth oxygen molecule? What could possibly be happening here? What could be going on that promotes acetylcholinesterase to bind acetylcholine as well as diazinon, chlorpyrifos, or any other organophosphate or carbamate insecticide, instead of paraquat or dioxin? What drives the relative specificity of these bindings? Chemistry perhaps? Acipenser
DrBot: Let's focus on this for a moment – If there is no chemical reason why TCT maps to Serine then how do you make TCT map to Threonine? I must be tired, but I don't understand the question. I think I have been explicit enough. I quote myself: "There is no chemical reason why the sequence of the three nucleotides TCT maps to serine. There is no connection between the chemical nature of T, C and T (or their sequence) and the chemical nature of serine. The only reason why TCT maps to serine is because an enzyme is programmed to mount the aminoacid serine on the tRNA with the anticodon for TCT. There is no connection between the chemical properties of the anticodon and the chemical properties of the aminoacid. It's the information stored in the enzyme that corresponds to the information stored in the DNA sequence. And to the correct information about the sequence of aminoacids in the desired protein. So, in all senses, TCT (the sequence of three nucleotides, not the letters we use as a symbol of that sequence) is a symbol of the aminoacid serine." What is not clear, or wrong, in that? gpuccio
Obviously gpuccio is not claiming that the process is magic. He has clearly acknowledged that chemical processes are at work. But why? Why is there a chemical reason why TCT maps to serine? Removed from these very specific reactions, is there any other way to get serine from those same molecules? There's no chemical reason for the chemical reason. That is the point. His statement is obviously correct. The mapping is chemical. But there is no chemical reason for the mapping to exist in the first place. I hope no one is pretending not to understand these statements. That would be an even greater waste of everyone's time. ScottAndrews2
Scott: Who said that there is no chemical reason why TCT maps to Serine? gpuccio (5.1.4.3.2): There is no chemical reason why the sequence of the three nucleotides TCT maps to serine. In all of these discussions I never see any mention of the very well known effect of allosteric modification of proteins (and other polymers) by ligand binding. It is a well known and described phenomenon that binding various ligands away from the active site of an enzyme can, often dramatically, influence what can be bound at the active site, as well as influence the affinity of the ligand for the active site. Indeed, hemoglobin is one, among many, examples of the influence of chemistry on binding/unbinding of specific ligands. Acipenser
Who said that there is no chemical reason why TCT maps to Serine?
gpuccio:
There is no chemical reason why the sequence of the three nucleotides TCT maps to serine.
--
Has it not been repeated ad infinitum that the process is chemical?
Yes, that is what we have been saying! DrBot
DrBot, Who said that there is no chemical reason why TCT maps to Serine? Has it not been repeated ad infinitum that the process is chemical? But why is there an enzyme that maps one to another? Without that enzyme, what would the relationship be? Are you prepared to assert that different enzymes could not execute alternate mappings? (In doing so you would refute the common argument that the existing configuration of life is one of many possible "targets.") The reason why 01000001 maps to "A" on a computer is purely electronic. Are you prepared to make the same argument, that because a purely electronic process converts 01000001 to "A", the relationship is not symbolic? If you wish to make the argument in one case then you must follow it to its conclusion and apply the same reasoning elsewhere. But you cannot, and will not. Or you will come up with a strained explanation to account for the difference. It then becomes evident that the difference is arbitrary and capricious. You don't mind one being a symbolic code, but you don't want the other to be one. ScottAndrews2
There is no chemical reason why the sequence of the three nucleotides TCT maps to serine.
Let's focus on this for a moment - If there is no chemical reason why TCT maps to Serine then how do you make TCT map to Threonine? DrBot
GinoB: Lucky you... gpuccio
Elizabeth: I can't follow you anymore. Luckily, KF has said what was to be said. I find it strange that you find so many things, that are quite simple, "misleading". Why misleading? Because they could perhaps lead us to the right inference, that is, design? You say: I also do not accept that the mapping of 64 triplet base-pair sequences to 20 amino acids is "SYMBOLIC" not "chemical". And again you are wrong. The code is symbolic. The implementation of information (the process of writing of information in a gene) is chemical. There is no chemical reason why the sequence of the three nucleotides TCT maps to serine. There is no connection between the chemical nature of T, C and T (or their sequence) and the chemical nature of serine. The only reason why TCT maps to serine is because an enzyme is programmed to mount the aminoacid serine on the tRNA with the anticodon for TCT. There is no connection between the chemical properties of the anticodon and the chemical properties of the aminoacid. It's the information stored in the enzyme that corresponds to the information stored in the DNA sequence. And to the correct information about the sequence of aminoacids in the desired protein. So, in all senses, TCT (the sequence of three nucleotides, not the letters we use as a symbol of that sequence) is a symbol of the aminoacid serine. But really, this is the last time I say it to you. gpuccio
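To illustrate the point gpuccio is making about the mapping being a stored assignment rather than something derivable from the letters themselves, here is the kind of lookup table a programmer would write. The codon assignments are the standard genetic code (DNA codons); the dictionary form is of course only an illustration, standing in for what the aminoacyl-tRNA synthetases implement chemically:

```python
# A few entries of the standard genetic code, written as a lookup table.
# Nothing in the strings 'TCT' or 'ACT' derives the product; the table
# (like the synthetase enzymes in the cell) simply assigns it.
codon_table = {
    'TCT': 'Ser', 'TCC': 'Ser', 'TCA': 'Ser', 'TCG': 'Ser',
    'ACT': 'Thr', 'ACC': 'Thr', 'ACA': 'Thr', 'ACG': 'Thr',
}
print(codon_table['TCT'])   # Ser
print(codon_table['ACT'])   # Thr
```

Swapping the values in the dictionary would give a different but equally workable table, which is the sense in which the assignment is conventional from the software point of view; whether the biological assignment could likewise have been otherwise is exactly what the thread is arguing about.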
gpuccio
GinoB: Really, that does not even deserve an answer.
You mean you don't have an answer for the points raised. But I understand. GinoB
GinoB: Really, that does not even deserve an answer. gpuccio
It’s just not an abstract code.
Gino, can you do me a favor. I have asked this question a number of times and no one can give me an answer. Will you please answer this question: If on one hand we have a thing that "is a genuine" representation, and on the other hand we have something that "just acts like" a representation, can you look at the physical evidence and tell me the distinction? Thanks. I'll be off for most of the day, but I'll try to check back in. Upright BiPed
Dr Liddle: Please. This is beginning to sound like word-play games. To see is a well known metaphor for to understand -- probably strongly related to our dominant sense, sight. To be blind then is to not understand. A search approach that is not guided by intelligence and relevant information -- by understanding -- is blind. Per the observed basic classes of causal factors, such a search relies on chance processes and mechanical necessity without intelligent direction, and it does a random walk in a config space until it picks up a trend which it can then exploit by using Mt Improbable hill climbing. (And see, I have had to add the Mt Improbable part because of previous verbal games, when it should have been quite clear what hill climbing as a general term means.) That is plainly not circular; it is empirically based and reasonable. Have you ever played blind search games? Stumbling about blindly is one thing; being guided by verbal cues from one who knows and sees makes a big difference. Even a treasure search where one does not have any clues is a blind search. "Warmer . . . colder" signals allow rapid convergence precisely because of added intelligent direction, i.e. design. There should be no need to elaborate on such simple, common sense points. Something is wrong, and I do not want to get specific or overly frank, lest I be accused of being rude, ending in derailment of the thread. But surely, we can do a LOT better than this. GEM of TKI kairosfocus
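The "warmer . . . colder" contrast can be made concrete with a toy search. This is only a sketch, with binary chopping standing in for the distance cues of the parlour game; the numbers and function names are illustrative, not anyone's actual model:

```python
import random

N = 1_000_000          # size of the toy search space
target = 424_242       # the hidden "treasure"

def blind_tries(max_tries, seed=1):
    """Blind search: independent random guesses, no feedback used."""
    rng = random.Random(seed)
    for t in range(1, max_tries + 1):
        if rng.randrange(N) == target:
            return t
    return None        # not found within the budget

def guided_tries():
    """Guided search: each 'warmer/colder' cue halves the remaining interval."""
    lo, hi, tries = 0, N - 1, 0
    while lo <= hi:
        tries += 1
        mid = (lo + hi) // 2
        if mid == target:
            return tries
        lo, hi = (mid + 1, hi) if mid < target else (lo, mid - 1)
    return tries

print(guided_tries())      # at most 20 cues, since log2(1,000,000) is under 20
```

With feedback, the target falls in at most about log2(N) steps; without it, the expected number of blind guesses is on the order of N. Whether biological search is better modelled by the first or the second is, of course, the disputed question in this thread.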
gpuccio
And that no known not-designed object exhibits it (equally easy to demonstrate).
Please demonstrate it then. Don't forget to include your method for determining that an unknown object is not-designed first without using your 'dFSCI' metric. Otherwise you're guilty of completely circular logic. "This object has lots of dFSCI, so it must be designed! How do we know it's designed? Because it has lots of dFSCI!" ...round and round you chase your tail. GinoB
The Digital Code of DNA - 2003 - Leroy Hood & David Galas Excerpt: The discovery of the structure of DNA transformed biology profoundly, catalysing the sequencing of the human genome and engendering a new view of biology as an information science. http://www.nature.com/nature/journal/v421/n6921/full/nature01410.html bornagain77
gpuccio
The genetic code is not base 4? Protein coding genes code for proteins, you know. The information is coded in base 4. Why do you deny that simple concept?
No. The human-designed system used to represent the complicated chemical reactions of life is base 4. Digitizing an analog signal to record it doesn't magically make the original signal digital. Pretty much everyone on the planet understands that the map is not the territory, except ID supporters it seems. GinoB
A search that is not intelligently directed is by definition blind.
In that case your argument is completely circular.
...the issue is not to adapt an existing solution to something close by in config space, but to get to the original solution, i.e to the shores of the island of function.
Why is "the issue...not to adapt an existing solution to something close in config space"? It seems to me that's exactly what the issue is. What do you mean by "the original solution"? If you mean the simplest possible self-replicator, then, sure, but clearly Darwinian evolution is not invoked to account for the necessary conditions for Darwinian evolution! But once started, Darwinian evolution is a perfectly good method for finding novel solutions, which is why we actually use them. Elizabeth Liddle
ES: Prezactly. kairosfocus
Dr Liddle: A search that is not intelligently directed is by definition blind. And the point where you speak of searching within an island of existing function is exactly the key point highlighted by design theory: the issue is not to adapt an existing solution to something close by in config space, but to get to the original solution, i.e. to the shores of the island of function. Until you find initial function, if you are not intelligently configuring elements towards intelligently identified forms and arrangements that will credibly work, you are forced to undertake blind search without feedback across the vast majority of the config space. This starts first of all in the warm little pond or equivalent, and pardon, but until one shows an OBSERVED path to metabolism plus symbolic replication, one has nothing. The tree of life icon that so rules the minds of many has no root. Then, when one has first got some function, one needs to show how incremental stepwise changes, which must all be functional, can move from an initial body plan to more complex ones with specialised organs etc. And this has to be embryologically feasible, where there are multiple complex components that must all fit and work together in collaboration the FIRST time, or the body plan fails to develop from the initial zygote or equivalent. That characteristic wiring-diagram integrated complexity is the direct reason to see islands and archipelagos of function in large seas of non-functional configs. Something like the bird wing -- feathers and controls plus power systems plus specialised lungs, highlighted by co-founder of modern evolutionary theorising, Wallace, in support of his intelligent evolution view -- is a good illustrative example. The co-adaptations needed to change a bear or a cat or a hippo etc. into a whale are similar. The eyes are a similar case. And more. Neither root nor main branches of the so often portrayed tree of life are to be seen in the fossil record, nor in the lab today. 
The evidence points strongly to islands of function, regardless of the demand of the theory established by a priori materialism, that there MUST be smoothly graded pathways from the root to the branching body plans. (Notice the issue of the trade secret of paleontology.) Those islands, from the DNA we observe, require about 100,000 - 1 mn bits of info for the first cell plan, and onward 10 - 100 million plus for the main body plans for multicellular organisms. That there is a smoothly graded path from unicellular to multicellular organisms on the various body plans, or that the blind -- non-intelligent -- searches of relevant config spaces required to effect this can be done without intelligence (the ONLY observed source of FSCI), is an a priori demand of a theory accepted as the implication of a worldview, not something that has been warranted empirically. That is why Johnson, in reply to Lewontin et al, is so stinging:
For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose." . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
Pardon, finally, but this sort of over and over again definitionitis game gets old very fast. GEM of TKI kairosfocus
Comments like 7.1.1.1.2, about blind search being good, stem from a lack of understanding of the intractability of life, and of the fact that biofunction is not defined over large areas of configuration spaces. Blind 'trial and error' search is no good for anything of real engineering importance. Eugene S
Petrushka: Why should it be necessary to move away from "the materials and forces of nature" that are being used to implement the digital system, to show that it is a digital system? ALL digital systems -- all engineered systems -- are implemented using "the materials and forces of nature." ABET:
ENGINEERING is the profession in which a knowledge of the mathematical and natural sciences gained by study, experience, and practice is applied with judgment to develop ways to utilize economically the materials and forces of nature for the benefit of mankind.
Why not work through the response to Dr Liddle just above, and in particular focus on the way that tRNA, the crucial specifying element in the AA chaining system, works? Notice the standard CCA coupler on the end opposite to the anticodon? If you object to the use of key-lock fitting and bumps/drops in a digital code [codon-anticodon fit], please observe that Braille, used with blind people, is exactly a 6-bit digital code based on bump/no bump in a physically organised array in a physical medium such as paper. And how hard this is to do, joined to how fruitless a trial-and-error search approach would be, is a clue to how elegantly knowledgeable and skilled the design is. Do you think that illiterate primitives would be impressed if they were to come across a motherboard, maybe with the chip covering blown off, and see the square of silicon and the plastic, wires and solder? Surely, such a complex thing is very hard to do, and so how could someone do it apart from trial and error that would most likely fail? Of course the answer is trillions invested across centuries to develop and build up, then propagate, the science and create the cluster of required industries. Now, move this all to the next level where we are dealing with sophisticated molecular nanotech. Does that help you see more clearly? GEM of TKI kairosfocus
Blind search is far less powerful than heuristically (intelligently) guided search. You can think of blind search as of a heuristic search with a very poor heuristic (something like a zero knowledge default). I am not saying that in all lab experiments they must be using heuristic guidance but that is a possibility.
What do you mean by "blind search"? Evolution is not "blind search" in a very important sense - it "searches" where it has already "found" solutions. It does not "blindly" grope anywhere in the solution space at random. That's why it's such a good search method for fitness landscapes in which good solutions are clustered in neighbourhoods, as is the case in biology (and in many engineering contexts too). Elizabeth Liddle
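The distinction being argued over, between blindly groping anywhere in the space and "searching where you have already found", can be made concrete with a toy sketch. Everything here (the 20-bit landscape, the function names, the trial counts) is my own illustration for the two search strategies, not anyone's model of biology:

```python
import random

random.seed(1)

TARGET = [1] * 20  # a 20-bit "functional" configuration, purely illustrative

def fitness(genome):
    # number of bits matching the target configuration
    return sum(g == t for g, t in zip(genome, TARGET))

def blind_search(trials):
    # sample uniformly at random; no memory of previous finds
    best = 0
    for _ in range(trials):
        genome = [random.randint(0, 1) for _ in range(20)]
        best = max(best, fitness(genome))
    return best

def cumulative_search(trials):
    # keep the current best and mutate it: "search where you have
    # already found" partial solutions, accepting non-worse variants
    current = [random.randint(0, 1) for _ in range(20)]
    for _ in range(trials):
        child = current[:]
        child[random.randrange(20)] ^= 1  # flip one bit
        if fitness(child) >= fitness(current):
            current = child
    return fitness(current)

print(blind_search(1000), cumulative_search(1000))
```

On a landscape where good solutions cluster, the cumulative strategy reliably reaches the optimum while blind sampling stalls well short of it; whether real fitness landscapes have that clustered structure is exactly the point in dispute in this thread.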
kf, of course I am not denying the genetic code! I'm saying that "base 4 digital" is extremely misleading description of that code. I don't care what wiki says - I have presented my reasoning. If you have problems with my reasoning, please articulate what the problem is. I also do not accept that the mapping of 64 triplet base-pair sequences to 20 amino acids is "SYMBOLIC" not "chemical". What's not chemical about it? Elizabeth Liddle
Dr Liddle: That DNA is a physical implementation of digital, discrete state, base-4 coding is not in serious doubt. Let's clip Wiki speaking against interest:
The genetic code is the set of rules by which information encoded in genetic material (DNA or mRNA sequences) is translated into proteins (amino acid sequences) by living cells. The code defines how sequences of three nucleotides, called codons, specify which amino acid will be added next during protein synthesis. With some exceptions,[1] a three-nucleotide codon in a nucleic acid sequence specifies a single amino acid. Because the vast majority of genes are encoded with exactly the same code (see the RNA codon table), this particular code is often referred to as the canonical or standard genetic code, or simply the genetic code, though in fact there are many variant codes. For example, protein synthesis in human mitochondria relies on a genetic code that differs from the standard genetic code. Not all genetic information is stored using the genetic code. All organisms' DNA contains regulatory sequences, intergenic segments, chromosomal structural areas, and other non-coding DNA that can contribute greatly to phenotype. Those elements operate under sets of rules that are distinct from the codon-to-amino acid paradigm underlying the genetic code . . . . After the structure of DNA was discovered by James Watson and Francis Crick, who used the experimental evidence of Maurice Wilkins and Rosalind Franklin (among others), serious efforts to understand the nature of the encoding of proteins began. George Gamow [--> the Russian-American Astronomer] postulated that a three-letter code must be employed to encode the 20 standard amino acids used by living cells to encode proteins. With four different nucleotides, a code of 2 nucleotides could only code for a maximum of 4^2 or 16 amino acids. A code of 3 nucleotides could code for a maximum of 4^3 or 64 amino acids.[2] The fact that codons consist of three DNA bases was first demonstrated in the Crick, Brenner et al. experiment. The first elucidation of a codon was done by Marshall Nirenberg and Heinrich J. 
Matthaei in 1961 at the National Institutes of Health. They used a cell-free system to translate a poly-uracil RNA sequence (i.e., UUUUU...) and discovered that the polypeptide that they had synthesized consisted of only the amino acid phenylalanine. They thereby deduced that the codon UUU specified the amino acid phenylalanine. This was followed by experiments in the laboratory of Severo Ochoa demonstrating that the poly-adenine RNA sequence (AAAAA...) coded for the polypeptide poly-lysine[3] and that the poly-cytosine RNA sequence (CCCCC...) coded for the polypeptide poly-proline.[4] Therefore the codon AAA specified the amino acid lysine, and the codon CCC specified the amino acid proline. Using different copolymers most of the remaining codons were then determined. Extending this work, Nirenberg and Philip Leder revealed the triplet nature of the genetic code and allowed the codons of the standard genetic code to be deciphered. In these experiments, various combinations of mRNA were passed through a filter that contained ribosomes, the components of cells that translate RNA into protein. Unique triplets promoted the binding of specific tRNAs to the ribosome. Leder and Nirenberg were able to determine the sequences of 54 out of 64 codons in their experiments.[5] Subsequent work by Har Gobind Khorana identified the rest of the genetic code. Shortly after, Robert W. Holley determined the structure of transfer RNA (tRNA), the adapter molecule that facilitates the process of translating RNA into protein. This work was based upon earlier studies by Severo Ochoa, who received the Nobel prize in 1959 for his work on the enzymology of RNA synthesis.[6] In 1968, Khorana, Holley and Nirenberg received the Nobel Prize in Physiology or Medicine for their work.[7] . . . . The genome of an organism is inscribed in DNA, or, in the case of some viruses, RNA. The portion of the genome that codes for a protein or an RNA is called a gene. 
Those genes that code for proteins are composed of tri-nucleotide units called codons, each coding for a single amino acid. Each nucleotide sub-unit consists of a phosphate, a deoxyribose sugar [--> the sugar-phosphate chaining backbone], and one of the four nitrogenous nucleobases [--> the info storing "side-branch"] . . . . Each protein-coding gene is transcribed into a molecule of the related polymer RNA. In prokaryotes, this RNA functions as messenger RNA or mRNA; in eukaryotes, the transcript needs to be processed to produce a mature mRNA. The mRNA is, in turn, translated on the ribosome into an amino acid chain or polypeptide.[8]:Chp 12 The process of translation requires transfer RNAs specific for individual amino acids with the amino acids covalently attached to them, guanosine triphosphate as an energy source, and a number of translation factors. tRNAs have anticodons complementary to the codons in mRNA and can be "charged" covalently with amino acids at their 3' terminal CCA ends. Individual tRNAs are charged with specific amino acids by enzymes known as aminoacyl tRNA synthetases, which have high specificity for both their cognate amino acids and tRNAs. The high specificity of these enzymes is a major reason why the fidelity of protein translation is maintained.[8]:464–469 There are 4^3 = 64 different codon combinations possible with a triplet codon of three nucleotides; all 64 codons are assigned for either amino acids or stop signals during translation. If, for example, an RNA sequence UUUAAACCC is considered and the reading frame starts with the first U (by convention, 5' to 3'), there are three codons, namely, UUU, AAA, and CCC, each of which specifies one amino acid. This RNA sequence will be translated into an amino acid sequence, three amino acids long.[8]:521–539 A given amino acid may be encoded by between one and six different codon sequences.
A comparison may be made with computer science, where the codon is similar to a word, which is the standard "chunk" for handling data (like one amino acid of a protein), and a nucleotide is similar to a bit, in that it is the smallest unit . . .
Notice, this work was rewarded with a Nobel Prize over forty years ago. That is how long ago it was not only no longer a matter of significant dispute, but a celebrated achievement of science, that DNA was understood to be an informational macromolecule, carrying coded information. Observe how it was PREDICTED that, to code for 20 AA's, you would need a triplet coding scheme, as 4^3 = 64, whilst 4^2 = 16. DNA as a code-based linear macromolecule acting as a physical basis for a string data structure with 4-state digital elements is not a matter of serious dispute. What is therefore significant is why there are at UD those who so hotly dispute this long since well documented and commonly accepted reality. It cannot be for want of familiarity with basic facts, as these are easily accessible and have been taught in schools from grade or secondary level for decades, as well as being all over the media and Internet. The answer, to be direct, is plainly ideological, as the strong, sharply reactive objection to easily confirmed terms like "digital" and "code" -- notice Dr Bot's preference above to substitute the less familiar term, "discrete" -- reflects. Digital MEANS discrete state, as Wiki also conveniently documents [I confess, I here feel like one having to "prove" what "ABC . . ." means to a literate person]:
A digital system[1] is a data technology that uses discrete (discontinuous) values. By contrast, analog (non-digital) systems use a continuous range of values to represent information. Although digital representations are discrete, they can be used to carry either discrete information, such as numbers, letters or other individual symbols, or approximations of continuous information, such as sounds, images, and other measurements of continuous systems. The word digital comes from the same source as the word digit and digitus (the Latin word for finger), as fingers are used for discrete counting. It is most commonly used in computing and electronics, especially where real-world information is converted to a digital format as in digital audio and digital photography . . .
Similarly, Wiki tells us that:
A code is a rule for converting a piece of information (for example, a letter, word, phrase, or gesture) into another form or representation (one sign into another sign), not necessarily of the same type. In communications and information processing, encoding is the process by which information from a source is converted into symbols to be communicated. Decoding is the reverse process, converting these code symbols back into information understandable by a receiver. One reason for coding is to enable communication in places where ordinary spoken or written language is difficult or impossible. For example, semaphore, where the configuration of flags held by a signaller or the arms of a semaphore tower encodes parts of the message, typically individual letters and numbers. Another person standing a great distance away can interpret the flags and reproduce the words sent.
If it were not so sad, I would be amused by the objection in your last paragraph:
please do not belittle the perfectly good argument that DNA is not “digital base 4” in any useful sense. My view is that it’s a useless model, because the bases are not switched. The system is alphabetic not digital (which should be equally “worrying” to me, on your logic, but isn’t, and therefore undermines the argument that we are running scared from the implications of “digital base 4”.)
The alphanumeric system used to communicate in written English, FYI, is precisely a digital system of discrete glyphs, one that is then translated into one of several common binary digital codes, e.g. the 7- or 8-bit [parity check] ASCII code. The chain of symbols in DNA, FYFI, can be reframed to code for different proteins, and this has apparently been observed. That is, the symbols may function differently by changing the framing -- a high art of machine code design that I have never even TRIED to do, as I came along in the days when we had big enough EPROMs. Thank God for the good old 2716! The evidence is more than compelling, to all save those who are ideologically committed otherwise. DNA is an informational macromolecule used in the heart of the cell that physically instantiates a base-4, digital, discrete state string data structure containing prescriptive information, especially protein codes and regulatory information. As noted with reference to the charging of tRNAs, the system is also highly specific, where the COOH end of the AA is locked to a standard tool-tip on the tRNA, based on its configuration. (It is chemically possible to force a false charging of any tRNA because of that universal coupler system.) The AA-carrier tool tip and the codon-matching anticodon are at opposite ends of the tRNA, and so we see how the transfer from RNA to emerging protein -- a translation process that is also diagnostic of a code in action: this is mapping, from a 64-state system to a 20-state one [with some key exceptions] -- is SYMBOLIC, not a matter of blind chemical forces. And BTW, the system is subject to reprogramming, and I gather experimenters have recently reconfigured it to code for different sequencings. So, the matter is plain. GEM of TKI kairosfocus
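The codon arithmetic and the reading-frame point from the exchange above can be illustrated with a short sketch. The five-entry codon table is a hand-picked subset of the standard genetic code, purely for illustration; the UUU/AAA/CCC assignments are the ones from the Nirenberg and Ochoa experiments quoted earlier:

```python
# Tiny subset of the standard RNA codon table (codon -> amino acid),
# just enough to translate the Wiki example sequence in two frames.
CODON_TABLE = {
    "UUU": "Phe", "AAA": "Lys", "CCC": "Pro",
    "UUA": "Leu", "AAC": "Asn",
}

def translate(rna, frame=0):
    """Read complete triplets starting at `frame`; a trailing partial
    codon is simply dropped, and unknown codons render as '?'."""
    peptide = []
    for i in range(frame, len(rna) - 2, 3):
        peptide.append(CODON_TABLE.get(rna[i:i + 3], "?"))
    return "-".join(peptide)

# 4 bases, triplet codons: 4^3 = 64 possible codons
print(4 ** 3)                      # 64
print(translate("UUUAAACCC"))      # frame 0: UUU AAA CCC -> Phe-Lys-Pro
print(translate("UUUAAACCC", 1))   # frame 1: UUA AAC (CC incomplete) -> Leu-Asn
```

Shifting the frame by one base yields an entirely different peptide from the same symbol string, which is the reframing behaviour described in the comment.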
Petrushka, I see the point and almost agree with you. The problem that you seem to be skipping over is that of initial conditions. I don't know any details about how they usually conduct search for biofunction in the lab. But here is what I think. 1. The experiments, for reasons of intractability, must necessarily start close enough to something functional. 2. Blind search is far less powerful than heuristically (intelligently) guided search. You can think of blind search as of a heuristic search with a very poor heuristic (something like a zero knowledge default). I am not saying that in all lab experiments they must be using heuristic guidance, but that is a possibility. So it is not exactly evolution in the lab. Call it directed (micro)-evolution or intelligent parameter setting. The point is where we start searching for function. I am an engineer and what KF and GP are saying makes a lot of sense to me. While I have no problem in admitting evolution as a possibility in principle (for things like adaptation), the inherent monumental intractability of life is a serious consideration against neo-Darwinism on a grand scale. Eugene S
No, gpuccio. Clearly hexadecimal numbers are digital, incorporate place value, and any digit can be switched between one of sixteen states. Polynucleotide base-pair sequences have no "place value", and "switching" a base pair to take a different state (if we call replacing it in a copy "switching" it) is only one of many ways in which the sequence is modified - insertion, deletion and duplication are just as important. Moreover, polynucleotide base-pair replacements are not part (that I know of) of healthy organismic function, but rather something that happens during reproduction. During the life of an organism, we hope that our DNA molecules stay pretty much the same as the one we started off with. They don't always, of course, which is why we get cancer. What does happen, however, during the life of an organism, indeed repeatedly as I am typing this, is that genes are switched between "off" and "on" states. In that sense they are "digital", but binary, and still not a system with place value. On the other hand DNA is quite like an alphabetic system in that it can be parsed into three-part "letters" (each consisting of a base-pair triplet) which in turn form combinations that "spell" a specific protein, in something of the way that roman letters "spell" a specific word. However, even here the analogy breaks down, because whereas a DNA sequence, under certain conditions, triggers a sequence of chemical processes that result in the synthesis of a physical object (the coded protein), the letter sequence "JUSTICE" or even "CONCRETE" merely evokes in a pair of people the same shared concept, and neither justice nor concrete are synthesised by the writing of the word. That is certainly not to say that information is not an important concept when considering genetics, but to say "heere bee Dragones" - or rather acute risk of inadvertent equivocation! Elizabeth Liddle
material.infantacy: I suppose it is because, if they admit the concept (and indeed, many researchers now do exactly that), they are in a mess with their theory. But the concept is strong, beautiful and simple. You define a function. You compute the minimum number of bits necessary to express that function. You evaluate whether the random system you are considering could reasonably produce that result alone, or whether there are explicit necessity algorithms that can do the same, either alone or associated with the random system. It is not so difficult. (Mind, all who read: this is not the rigorous definition, just a generic description). gpuccio
Elizabeth: "The system is alphabetic not digital " !!!!! Are you serious? Then hexadecimal numbers are bastard? gpuccio
Elizabeth: What do you mean? Sometimes I really can't understand what you mean!!! The genetic code is not base 4? Protein coding genes code for proteins, you know. The information is coded in base 4. Why do you deny that simple concept? gpuccio
KF: You are definitely better than me at that! :) gpuccio
Dr Liddle: Let's ask: have you ever done coding at machine or near-machine level (i.e. assembly)? Do you have practical knowledge of the difference between code adapted to an abstract info processing process, and code adapted to the specifics of machine implementation? I do. Machine code is the term given for object code that is specifically adapted to the details of a given machine. Source code, by contrast, is abstracted from those specifics. The code we see in DNA is precisely adapted to the details of the machine, and is an example of such. Cf the process of protein synthesis to see why I say that. So, no, I am not just making empty wishful assertions that are suspect and need to be proved. I am speaking as an experienced designer and programmer at assembly/machine levels. The idea of a DNA compiler would be where there is an analogy, if anything, as we have certainly found it VERY advantageous to use abstracted languages adapted to the needs of the problem. And, somewhere down the line we WILL develop a DNA compiler -- and a DNA decompiler to move from object code to some version of a source code. That, BTW, is part of what I have in mind when I speak of how Venter has given us proof of concept of intelligent design of organisms, and that we need to move some number of generations down the road. Where, a molecular nanotech lab several generations beyond Venter could design and implement C-chemistry, cell based life, as a sufficient cause. GEM of TKI kairosfocus
Nope. What I mean is that sequences cannot be translated into folds except by doing the chemistry. One can emulate the chemistry (as in Folding@Home), but this is monumentally difficult and there appear to be no shortcuts. The problem for a designer is one of knowledge. How does a designer accumulate the knowledge to know what sequences are functional, when the possible combinations of a single gene exceed the number of particles in the universe? Where is the knowledge stored? How is it accessed? Most engineering problems can be fixed by research and development. Rockets were invented at least a thousand years before they became practical for transportation. But the problem of cellular automata and (I think) the analogous problem of protein folding appear to be mathematically intractable. What human engineers do when experimenting with novel sequences is generate lots of sequences and select those that produce desirable folds. From those, a tiny subset may have some minimal function. So what is being done is evolution in the laboratory. The problem from a design standpoint is that nature has far greater resources for generating and testing novel sequences. Now either functional space is such that it can be traversed incrementally, or it isn't. If it isn't, then it is inaccessible to designers as well as to evolution. Petrushka
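The "generate lots of sequences and select those that produce desirable folds" loop described above can be sketched as follows. The folds_usefully predicate here is an arbitrary toy stand-in of my own invention; the comment's point is precisely that the real evaluation (the chemistry) offers no such cheap shortcut and can only be run, not predicted:

```python
import random

random.seed(0)

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # one-letter codes for the 20 standard amino acids

def folds_usefully(seq):
    # Toy stand-in for the expensive "do the chemistry" evaluation:
    # an arbitrary criterion, NOT a model of real protein folding.
    return seq.count("A") >= 3 and seq.endswith("W")

def screen(n_candidates, length=12):
    # generate lots of random sequences, keep the few that pass
    hits = []
    for _ in range(n_candidates):
        seq = "".join(random.choice(ALPHABET) for _ in range(length))
        if folds_usefully(seq):
            hits.append(seq)
    return hits

hits = screen(100_000)
print(len(hits), "hits out of 100,000 candidates")
```

Even with this very permissive toy criterion, only a small fraction of candidates survive the screen, which is the resource problem the comment raises for both designers and evolution.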
Interpretations are bound to be contextual. What is meaningful in one context, may not be meaningful in others. Is that what you mean by "independently from the processes of chemistry"? Eugene S
I have that impression as well. If I am not mistaken, Kauffman hypothesises e.g. that some sort of life was bound to emerge sooner or later in our universe. As far as I understand what he says, he believes it is possible via total co-evolution. Eugene S
The code physically instantiated in DNA is machine code, object code.
Please support this assertion. Elizabeth Liddle
At some point our darwinist interlocutors, or at least some of them, seem to become more or less aware that some of the things ID says are worrying.
No, gpuccio, this is not the case. There is nothing intrinsically "worrying" about the idea that DNA is a "code" or that it is "digital". As I've said, in some respects DNA does act as a "digital" "code", only it is in binary, not base 4 - genes are switched between "off" and "on" states. Nobody "worries" about this - the discovery was made by "darwinists" and forms a major plank in modern evolutionary biology, specifically the branch called "evo devo". So please do not belittle the perfectly good argument that DNA is not "digital base 4" in any useful sense. My view is that it's a useless model, because the bases are not switched. The system is alphabetic not digital (which should be equally "worrying" to me, on your logic, but isn't, and therefore undermines the argument that we are running scared from the implications of "digital base 4".) Elizabeth Liddle
If this is not evidence of design, nothing is. This is not evidence of design. Ergo... ! Chas D
The main problem with design is not whether DNA is analog or digital, but whether, in principle, a DNA sequence can be interpreted independently from the processes of chemistry. I would like to see a design advocate demonstrate (even as a thought experiment) that one can anticipate the biological implementation of a coding or regulatory sequence without using trial and selection. I'm thinking some things in life resemble cellular automata, in that one can only see the results of even a simple code by running it. Petrushka
Hi GP, regarding your f), there seems to be an assumption on the part of some interlocutors that any contingent sequence in DNA is potentially functional, given the right combination of organism and environmental conditions. I could be wrong about this, but it would help explain why there is sometimes a denial (or doubt) that functionally specified information is an objective concept. material.infantacy
"...that your made up, subjective ‘dFSCI’ metric..."
Do you mean that the acronym is made up? Aren’t they all?
'...only intelligently designed things can have large amounts of dFSCI” is a hypothesis,"
It’s an observation, excepting the subject at issue. Do you know of anything empirically established to be the product of necessity, which contains large strings of specified contingency? I trust you know the difference between specified and random contingency, both being uncompressible forms of information.
"...you’re going to have to measure both known designed and known not-designed things."
Computer code is specified and complex. Computer code is designed. Nothing in nature which is explicable by necessity contains specified complexity, or anything representing or analogous to computer code. This is obvious. The informational content of computer code can be assessed, just as the information content of the DNA molecule can. This is inarguable. The specified nature of the information content is inarguable. What's at issue is whether there is an explicable mechanism born out of the laws of nature which can account for it. This is what's at issue, not some idiotic objection to the use of terms like dFSCI. "Biology is the study of complicated things that give the appearance of having been designed for a purpose." [Dawkins] The Blind Watchmaker (1996) p.1 Evolution Quotes Francis Crick writes, "Biologists must constantly keep in mind that what they see was not designed, but rather evolved." Detecting Design in the Natural Sciences Now, we observe digitally coded information (dFSCI, or the digital subset of all examples of FUNCTIONALLY SPECIFIC COMPLEX INFORMATION) in both computer systems and in biological systems. If you don't like the dFSCI label, substitute "specified complexity", which has a history predating its use in ID; consider the subset of complex specified information that is digitally coded for function, and make up your own damn expression. The concept is real, so nobody cares if you take issue with the moniker. There's no evidence you even understand what FSCI entails; no wonder you take issue with dFSCI. It looks like you're new here. gpuccio is not - not by a stretch. You don't appear to even understand what he's talking about, you just shout naked incredulity at whatever he puts forward. Why don't you try addressing his arguments in a real discussion he has with EL, which begins here. That should establish if you have any grasp on what is actually being argued. material.infantacy
Welcome. kairosfocus
GB: The code physically instantiated in DNA is machine code, object code. (I would LOVE to see the DNA compiler! [Hint: it is NOT going to be molecular accidents filtered by trial and error, for the needle-in-a-haystack reasons just discussed.]) GEM of TKI kairosfocus
eep, eep, cheep, cheep! dFSCI is: geqghyeqoeghqutg3itghjbgioer There, proved! kairosfocus
GinoB:
. . . your made up, subjective ‘dFSCI’ metric
Have you done information theory at some point, and/or do you use it in your work? Let's start with basics: following Hartley's suggestion, info is measured since Shannon in 1948 in binary digits, i.e. bits (this is where the abbreviation was introduced; sorry, I go back to the era in which all of this was jargon for a weird field called telecommunications, and an associated one called digital electronics, with a bleed-over into a more exotic field called thermodynamics, for which there is a whole informational-approach school of thought that has been controversial for decades but is now getting much more respect). In effect -- cf my discussion in my always linked briefing note [through my handle], here and onward -- the number of possibilities for a field of configurations and the reasonable or observed distribution of outcomes was used to measure info: I = - log p, in bits if the log is to base_2. This was extended by Shannon to the case of average info per symbol, H. H is also a bridge to thermodynamics, as is now increasingly recognised. Let us consider some functional part, e.g. a car part, similar to the remarks just made to Dr Bot. It needs to be a fairly specific size, shape, material etc. to work. Work/fail -- as you know from, say, working with a car -- is a fairly objective matter. In engineering terms, there is a specification, with a certain degree of tolerance, that will be acceptable, and outside that range the part will not work. There is a zone T from which actual cases E will work, and this is a part of a much wider range of possibilities W, where the overwhelming majority will not work. Most possible lumps of, say, mild steel of the same size as our part will NOT work as an acceptable part. The concept of an island of acceptable function in a given context naturally emerges. 
(And so does the concept of an archipelago of related islands of function, e.g. a similar part will work in other engines for different cars, but usually parts are not freely substitutable. Function is context-specific as a rule. Hence also the concept that for a multi-part entity where several well-matched parts have to work together just right to get a particular overall function, each being necessary and the core cluster being jointly sufficient, we have irreducible complexity of the function.) Without loss of generality [WLOG] all of this can be reduced to digital information, by imposing a structured set of yes/no decisions in the context of a mesh of nodes and arcs [which, for a multi-part system like a car engine, is hierarchical, i.e. the nodes of a mesh at one level can be expanded into meshes in turn, etc., leading to the classical exploded "wiring" diagram so useful in assembly of a system like that]. In effect that is what a CAD package like AutoCAD does. That structured set of yes/no decisions gives us a natural measure of information in binary digits, or bits. In that context, digitally coded, functionally specific, complex information is quite meaningful. However, there is another context, in which the digital info is directly present. In text like this posted comment, we are using a set of glyphs that form a set of symbols, typically represented as ASCII, a 7-bit digital code. A s-t-r-i-n-g of such symbols is also a natural structure, as you just saw. Similarly, for acceptable, intelligible text in say English or Italian, not gibberish, certain rules need to be pretty fairly adhered to. Some degree of tolerance may be there for typos and errors of grammar, but not that much, certainly not much compared to the field of possibilities for a string of a given length, where each member of the string may take up 128 possibilities. The number of possibilities for a string of n elements is 128^n for ASCII characters, i.e. things run up very fast indeed. 
Prescriptive info, i.e. step by step instructions for acts to be carried out, is very similar, and is familiar from computer programs, including these days markup for display, e.g. HTML tags like those you see below the box where you type in a comment. Procedural languages extend this to all sorts of things, and that leads to the concept of a bug, whereby we see, again, that there may be a cluster of acceptable configs, but the vast majority of possibilities are not going to work. We are right back to the concept of digitally coded, functionally specific, complex info. Complexity is obviously a function of the number of possibilities, and can be measured in various ways. A convenient way is to compare the number of possibilities for a given string of bits, usually 500 or 1,000 as threshold, with the number of possible Planck time quantum states [PTQS's] of the atoms of our solar system or the observed cosmos since their reasonable time of formation, or the like. (Cf a recent peer-reviewed discussion here.) In effect, the 10^57 atoms of our solar system, in 10^17 s since formation, would have up to 10^102 PTQS's. This is 1 in 10^48 of the set of possibilities for 500 bits. Or, in familiar terms, you are taking a 1-straw sized sample from a field of possibilities equivalent to a cubical hay-bale 3 1/2 light-days across, the distance light would travel in that much time at 186,000 miles/s. A whole solar system could be lurking in that bale, and still sampling theory will tell you that you only have a right to expect to get what is TYPICAL, straw, not what is atypical. With 1,000 bits, it is much worse. Millions of universes the size of our observed universe could be lurking in a bale of the resulting size, and a 1-straw sample would even more overwhelmingly only reasonably come up straw. There is one known exception to this pattern: where the sample is intelligently directed, i.e. someone knows where to look to get needle, not hay. 
So, despite the dismissive nonsense and vituperation -- some have come over to UD to toss out assertions like "fake," and I have no doubt that where they do not have to keep a civil tongue in their heads, it is much, much worse -- that you will see out there in the circle of ill-informed but angry attack sites, the basic concept of a metric of when such a search on blind chance and mechanical necessity will be credibly hopeless is very useful and reasonable indeed. For reasons that are very close to the statistical foundations of the second law of thermodynamics. The discussion here shows a way to reduce, simplify and apply Dembski's metric in light of the above, based on several months of discussion here at UD with an earlier attempt to discredit the CSI concept and its metrics. Stating: Chi_500 = I*S - 500, in bits beyond the solar system threshold. I is an info metric that is relevant, whether I = - log p or the like, or even a direct estimate based on the nature of an inherently digital situation [as Shannon also used]. S is a dummy variable that is 1/0 according as, on reasonable grounds, the object in question is highly specific or may take up any config it pleases. 501 coins tossed at random and arranged in a string will take up any particular value, most likely one near a 50-50 distribution, and will be complex but not specific. 501 coins have 501 bits of info storing capacity, but under the circumstances S = 0, and so Chi_500 = -500. If the same coins are seen to be arranged in accord with the ASCII code for a statement in English, then that specification on function shifts matters dramatically. S = 1, and here we now see Chi_500 = 1, and the best explanation is the obvious one: design. The just-linked discussion covers several biological cases, based on Durston et al. and their recent peer-reviewed work on 35 protein families. 
Of course, it is possible to program a computer to do the equivalent of arranging 501 coins by hand, and that is what happens with programs that are often presented as demonstrating how such FSCI can arise by blind chance and mechanical necessity. Nope: as was discussed at length a few months back, GAs START in a defined target zone, with a neatly arranged, nice trendy fitness function, and then do some Mt Improbable-style hill-climbing to peaks of the function. But that begs the question of how you arrived in such a convenient location to begin with, i.e. on an island of function. THAT challenge is what [d]FSCI is about. And the answer is the one we know for all observed GAs to date: the key info was built in by the designers. That is, GAs show the power of design, as the needle-in-the-haystack issue points out. BTW, the digital material in the heart of the living cell, DNA, starts out at about 100,000 - 1 mn bits for the simplest actually observed life, and goes up to the billions for the more complex body plans. (If you want to hypothesise about a run-up to such life, please show us empirical cases of the spontaneous emergence of metabolising, self-replicating systems without undue experimenter intervention, from reasonable pre-life environments. We know from the unintended experiment of the canning industry that the likelihood of spontaneous emergence of life in even quite rich prebiotic soups with conveniently homochiral environments is rather small, with many billions of test cases in point. That is, no one has reported spontaneous emergence of novel life in such a can, after coming on 200 years of canning. And realistic prebiotic environments cannot assume homochirality or that degree of concentration, both of which have exponential effects on making the reactions much more likely.) So, dFSCI is not a suspect concept or metric expression.
It is just that it gives a message that is not very welcome to the institutionally dominant school of thought on the origin of life or of body plans. But then, 200 years ago or so, Wilberforce was a spokesman for a controversial and tiny minority. GEM of TKI kairosfocus
KF: Thank you for the very important clarification :) gpuccio
GinoB: a) Please explain the difference between abstract codes and, I suppose, "concrete codes" (?) b) Unwarranted and not pertinent. I was speaking of information in general. If you define information, it is very easy to apply that definition to biological information. No new definition is necessary. c) I have never said that information has "immaterial properties". But it is certainly an abstract concept, not definable in purely material terms (if we can give a meaning to "material", a word that I usually avoid using). d) I said "The information in DNA is not digital". That was the denial. That DNA is not digital is obvious and trivial. It has, however, been denied too, recently, by Elizabeth, and I have very explicitly stated on that occasion that molecules are not digital, and that normal molecules contain no digitally coded information, while DNA does. e) I have many times rigorously defined all those things. gpuccio
KF: Definitely deliberate gobbledygook :). But in the end they make more sense than many darwinist "arguments". (By the way, they were generated by randomly typing on my keyboard, after having conscientiously tried to get in the state of mind of a monkey) :) . gpuccio
GinoB: Well, just to explain why I say that you are just "looking for a fight": a) Have you read my many detailed posts about dFSCI? It seems not, from what you write. So why are you jumping to the conclusion that it is a "made-up, subjective metric"? b) You say: at best your claim "only intelligently designed things can have large amounts of dFSCI" is a hypothesis, not any sort of established truth. What you are expressing here is not the initial hypothesis, but the final inference. It is obvious that you don't know, or don't understand, the ID position. c) You say: To honestly test the hypothesis, you're going to have to measure both known designed and known not-designed things. That's exactly what I explicitly do in my reasoning about dFSCI. I show that known designed objects, that is human artifacts, often exhibit dFSCI (very easy to demonstrate). And that no known not-designed object exhibits it (equally easy to demonstrate). From those empirical observations, and only from them, derives the concept that dFSCI is an empirical indicator of design. Then I use that indicator to formulate a design inference for biological information, which does exhibit dFSCI in great amounts. That is the correct epistemological sequence. d) You say: You can't look at a whole class of unknown-origin objects (i.e. biological life) and then conclude that they're all designed based on the very thing you're trying to test. Obviously. I have never done that. That kind of statement only demonstrates that you have never read my posts on the subject, or never understood them. So, either you are cognitively superficial and arrogant, or you are just looking for a fight. QED. gpuccio
gpuccio
So, I believe they stick to some bizarre denials: a) DNA is not a code
Of course it's a code, meaning that it's a process where the inputs are mapped to the outputs. It's just not an abstract code. IDers love to play bait-and-switch equivocation games with the different definitions of code.
b) Information cannot be defined
No IDer has ever defined 'biological information' in any sort of rigorous or meaningful way.
c) Information does not exist
Of course information exists. It just doesn't have the immaterial properties you guys like to fantasize about.
d) The information in DNA is not digital
DNA is not digital.
e) Functional information is a sprtiehsgao f) dFSCI is a cbdkspfvorunm
Both 'functional information' and 'dFSCI' (and all the rest of the alphabet buzzterms you guys come up with) have never been rigorously defined. But that's OK. The scientific community knows keeping things fuzzy and vague is part of the IDer squid-ink escape strategy. If you never commit to definitions you can't be pinned down. GinoB
GP: are those Italian terms, or deliberate gobbledygook? kairosfocus
F/N: Above, Dr Bot raises the issue of analogue info. A discussion of digital info is without loss of generality [WLOG], as analogue info may be converted to digital, and by so doing, the tolerance range can be assessed -- indeed, this is an island-of-function issue. Take a car part, say for an engine. It has certain specific requisites as to shape, materials, size, etc. As we know from the modern world of digital drawings [cf. discussion here in the IOSE], such can be reduced to a mesh of nodes and arcs, in some structured order. To do so, in effect there is a structured chain of yes/no decisions, of some length. Such a chain is of course a measure of info in bits. The island-of-function issue emerges from seeing that this structure has a tolerance range [T], within which various actual configs [E] will be adequately acceptable. All of these sit in the wider space of all possibilities for something of that degree of complexity [W], where of course, for a particular context, by far most of W will be non-functional. A vinyl record is a good example in point, as is a bar of cams used to control an automaton. Both of these are analogue programs, and can WLOG be converted to equivalent digital ones. Modern info theory was developed in a largely analogue comms world, but reduced info to bits using a probability metric I = - log p. In effect, the easiest way to see how that works is to carry out a notional or actual analogue-to-digital conversion process, with some degree of tolerance for the acceptable function. Discussion in digital terms is WLOG. GEM of TKI kairosfocus
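The analogue-to-digital point above can be made concrete: once a tolerance band [T] is fixed, the number of yes/no decisions needed to pin down an analogue quantity follows directly. A small Python sketch (the part dimensions are hypothetical, chosen only for illustration):

```python
import math

# Digitising an analogue quantity: with a full range R and an
# acceptable tolerance t, distinguishing R/t levels takes
# ceil(log2(R/t)) bits -- the structured chain of yes/no decisions
# described in the comment above.
def bits_needed(full_range: float, tolerance: float) -> int:
    return math.ceil(math.log2(full_range / tolerance))

# Hypothetical engine-part dimension: 100 mm range, 0.01 mm tolerance
print(bits_needed(100.0, 0.01))  # 14
```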
gpuccio
I think I will not answer you any more. There is no hope, when the attitude is to search only for a senseless fight.
No one's looking for a fight. I'm just pointing out the big problems with your arguments, ones that you obviously have no answers for. Let's assume for a second that your made-up, subjective 'dFSCI' metric has some validity. You still have the issue that at best your claim "only intelligently designed things can have large amounts of dFSCI" is a hypothesis, not any sort of established truth. To honestly test the hypothesis, you're going to have to measure both known designed and known not-designed things. You can't look at a whole class of unknown-origin objects (i.e. biological life) and then conclude that they're all designed based on the very thing you're trying to test. It's called "affirming the consequent", and it's horribly bad reasoning. There's a reason the scientific community doesn't take such fatally flawed arguments seriously. Hint - it's not because of an evil conspiracy to EXPEL you. GinoB
material.infantacy: Of course it is digital. The fuss is easy to understand. At some point our darwinist interlocutors, or at least some of them, seem to become more or less aware that some of the things ID says are worrying. Maybe only in their subconscious. And, obviously, they have no good arguments to answer those things. Not their fault. There are no good arguments. So, I believe they stick to some bizarre denials: a) DNA is not a code b) Information cannot be defined c) Information does not exist d) The information in DNA is not digital e) Functional information is a sprtiehsgao f) dFSCI is a cbdkspfvorunm And so on... gpuccio
GinoB: I think I will not answer you any more. There is no hope, when the attitude is to search only for a senseless fight. Please review your epistemology, and think about the difference between a logical deduction and an empirical inference. And, if you want, look for the many occasions where I have detailed, supported and motivated all those assertions here. Again, have a good time (sincerely :) ). gpuccio
gpuccio
Because dFSCI is found empirically only in designed things. Because biological information has tons of dFSCI. Because you cannot explain that dFSCI in biological information in any other way, and the design inference remains the best explanation, indeed the only one we have at present.
Sorry, but your first statement is a completely unsupported assertion. Indeed, your whole argument is so logically flawed it would put a philosophy freshman to shame. "Salmon are found empirically only in fresh water. The Pacific Ocean has tons of salmon. Therefore the Pacific Ocean must be fresh water." ID "logic" at its finest. GinoB
Dear Liz: Could you give an example of an unsupported speculation that a Darwinist has presented in the name of science? I'm bewildered. I presented exactly what you have requested in my opening comments. Darwinism is essentially nothing but unsupported speculation in the name of science (except for such trivialities as antibiotic resistance in bacteria). By Darwinian speculation I'm referring to the proposed creative power of random errors filtered by natural selection (although this also covers Darwinian speculation concerning ancestor-descendant relationships in the fossil record, which are impossible to establish). If this thesis is true, every feature of every living thing that has ever existed must be explicable by this mechanism. Darwinists propose that there "must have been" a gradual slope on the backside of Mount Improbable, which explains how an inherently degenerative process (random errors) can produce the exact opposite. What we observe in biology is a highly sophisticated error-detection-and-repair algorithm, with associated machinery, that compensates for the destructive effects of random errors. If this is not evidence of design, nothing is. As always, Liz, I must express my appreciation for your contributions here. You are a fine person (anyone who appreciates classical music can't be all bad!), even though you are wrong. :-) GilDodgen
Dr Bot, Pardon, but you are dancin' wrong but strong. That DNA strands are string data structures using four-state elements, physically implemented using informational polymers -- even as electronic circuits use refined grains of sand with artfully introduced impurities -- should be plain. This is instantiation, not analogy; save to those who so desperately need an out that they want to pretend that by describing a digital system as "discrete" instead, they can then suggest that it's all an analogy, and analogies are not deductive proofs. First, the matter is plainly instantiation; and second, analogy is the foundation stone of inductive argument, which is the only class of argument that gives us empirical knowledge. No, I am not playing at giving misleading impressions; I am going off what I have studied, used and taught for over 30 years: digital systems are discrete state systems, by definition, and discrete state systems are digital systems by definition. Discrete meaning that between neighbouring states there are no defined intermediate states. The example I usually give is that one may not climb a ladder by standing between the rungs. BTW, when I have taught basic atomics, I have pointed out that there are no inter-atoms, i.e. being a particular element is a discrete state system too. Nature has a digital side to it. So, please note that I have taken pains to highlight how the matter is not merely being discrete state, but functionally specific complex information, having worked out the simplified Chi metric expression: Chi_500 = I*S - 500, bits beyond the solar system threshold, where I is an info metric rooted in the old I = - log p expression, directly or indirectly, and S is a dummy variable for specificity. 501 coins tossed at random will not be in a specified state, so S = 0, and by overwhelming odds will be near to a 50-50 distribution: Chi_500 = -500.
But if the coins are instead found to encode the ASCII code for a statement in English, S = 1, I = 501 bits, Chi_500 = 1. One will be well warranted to conclude the best explanation is the coins were set in that array by intelligence. DNA is full of FSCI and we are well warranted -- despite all sorts of objections like the above -- to infer to design. GEM of TKI kairosfocus
So the measurable amount of difference in information between a live human and a dead human is about 21 grams? paragwinn
Could you give an example of an unsupported speculation that a Darwinist has presented in the name of science? You mean like the birds-from-dinos propaganda? http://www.sciencedaily.com/releases/2009/06/090609092055.htm Blue_Savannah
Base 2: 0, 1
Base 4: 0, 1, 2, 3
Base 4: A, T, C, G

2^6 = 64
4^3 = 64
2^6 = 4^3 = (2^2)^3 = 2^(2*3) = 2^6

A = 0 = 00
T = 1 = 01
C = 2 = 10
G = 3 = 11

" " = 0, a = 1, b = 2, c = 3, d = 4, e = 5, f = 6, g = 7, h = 8, i = 9, j = 10, k = 11, l = 12, m = 13, n = 14, o = 15, p = 16, q = 17, r = 18, s = 19, t = 20, u = 21, v = 22, w = 23, x = 24, y = 25, z = 26

decimal: "digitally encoded"
decimal: "4 9 7 9 20 1 12 12 25 0 5 14 3 15 4 5 4"
binary: "100 1001 111 1001 10100 1 1100 1100 11001 0 101 1110 11 1111 100 101 100"
quaternary: "10 21 13 21 110 1 30 30 121 0 11 32 3 33 10 11 10"
quaternary: "TA CT TG CT TTA T GA GA TCT A TT GC G GG TA TT TA"

It seems reasonably digital, as well as base 4. I don't understand the fuss. material.infantacy
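material.infantacy's worked example above can be checked mechanically. A short Python sketch (the mapping and helper names are mine) that reproduces the quaternary/DNA-letter encoding:

```python
# Reproduce the mapping above: ' ' = 0, a = 1 .. z = 26, each value
# written in base 4, with digits 0,1,2,3 relabelled A,T,C,G.
DIGITS = "ATCG"

def to_quaternary(n: int) -> str:
    if n == 0:
        return DIGITS[0]
    out = ""
    while n:
        out = DIGITS[n % 4] + out
        n //= 4
    return out

def encode(text: str) -> str:
    values = [0 if ch == " " else ord(ch) - ord("a") + 1 for ch in text]
    return " ".join(to_quaternary(v) for v in values)

print(encode("digitally encoded"))
# TA CT TG CT TTA T GA GA TCT A TT GC G GG TA TT TA
```

The output matches the final line of the worked example, confirming the base-4 arithmetic in the comment.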
BTW Dr Bot, pardon a basic reminder: digital MEANS discrete — as opposed to continuous
Yes, it does. So why can't the d in dFSCI refer to discrete? Probably because, to the lay reader, discrete does not imply design in the way digital does! If digital just means discrete, then any molecular structure is digital because it is discrete. It is about arguments from analogy, KF. DrBot
Onlookers, for a 101, cf here. kairosfocus
BTW Dr Bot, pardon a basic reminder: digital MEANS discrete -- as opposed to continuous -- state. That's lesson 1 in digital technology. (I used to teach that one by contrasting rungs on a ladder with climbing a rope. That too is why I pointed out various bases for digital systems above, 12 hrs and 60 minutes on a clock, etc. And the physical substrates used to store or manipulate the info are secondary to its significance as discrete state info. Binary digits do not lose their informational significance by being stored or processed as transistor ckt voltage levels or magnetisation states or tones or phases or amplitudes moving along a phone line etc. And in von Neumann's kinematic replicator, short rods of different length were to be used as digital storage units.) kairosfocus
Kindly tell us the term we use to describe a discrete as opposed to continuous state system.
Discrete DrBot
Dr Liddle: We have a string structure, wherein the positions along the string take states commonly symbolised by G/C/A/T (or for RNA, U). There is no defined state between any pair of these possibilities. Discrete state, in a string data structure, and known to hold coded prescriptive information. If your side is reduced to pleading "analogy" or to denying the definition of digital, then that is quite telling. And, kindly recall, the inference to design -- as has been pointed out over and over -- is not on complexity, but on complexity with specificity, in this case by specific code-based function. In a fairly direct comparison, let us suppose we were to impose a six-state code on the system of dice, with the letters of the alphabet represented therein. If we saw a couple of hundred dice in a string, reading in no particular order, the sequence would best be explained by randomness. But if we saw another in which the letters were spelling out an intelligible message in English -- note the specification here -- the best explanation would be intelligence. For very obvious reasons. In the cell, we have 4-state strings, with the elements arranged to carry out highly specific functions based on algorithmic step-by-step sequences, e.g. protein sequencing. Why should we refuse to accept the best explanation for the latter as design? GEM of TKI kairosfocus
Dr Bot: Kindly tell us the term we use to describe a discrete as opposed to continuous state system. Instantiation is not analogy. GEM of TKI kairosfocus
Elizabeth: You ask why???? I was just restating the fundamental conclusion of ID theory. Do you want me to explain it all again from scratch? Because dFSCI is found empirically only in designed things. Because biological information has tons of dFSCI. Because you cannot explain that dFSCI in biological information in any other way, and the design inference remains the best explanation, indeed the only one we have at present. Because the only other model explicitly proposed, neodarwinism, completely fails to explain what it purports to explain. gpuccio
Elizabeth: But what in the world are you saying now? Sure, it's a polynucleotide with four possible base pairs, but that doesn't make it "digital". But it's not the DNA molecule that is digital. It's the information encoded in the gene! Codons: three nucleotides encode one amino acid. A base-four code, redundant, and including stop codons. That is digital. How can you misunderstand that simple concept? What has that to do with molecules being "digital"? Molecules are not symbolic information. They do not bear the coded information for the structure of other molecules. The DNA code is a code. All biologists seem to understand that. Is it so difficult to agree on such elementary issues? gpuccio
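The base-four arithmetic behind the codon claim above (three nucleotides per codon, 4^3 = 64) is easy to verify. A quick Python sketch:

```python
from itertools import product

# Three nucleotides per codon over a four-letter alphabet gives
# 4^3 = 64 codons -- enough to cover 20 amino acids plus stop
# signals, with the redundancy the comment mentions.
codons = ["".join(triplet) for triplet in product("ACGT", repeat=3)]
print(len(codons))  # 64
```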
Any encoding of functional information, if the information is complex enough, is an indicator of design.
Why? Elizabeth Liddle
DrBot: Any encoding of functional information, if the information is complex enough, is an indicator of design. It does not matter if the information is digital or analogue, discrete or continuous. In my discussions about dFSCI I have always specified that I choose a specific subset of FSCI, the digital (d), only because: a) The computations and models are easier. b) The biological information we are debating, which in my case is protein gene information, is certainly digital, and coded in a base-four code. Therefore, there is no loss of generality in the discussion. gpuccio
Exactly. As I keep saying, there's not much that's digital about DNA. Sure, it's a polynucleotide with four possible base pairs, but that doesn't make it "digital". And even if it did, it wouldn't make it designed. All molecules are "digital" in that they are, as you say, made of discrete entities: atoms, or monomers, or ions. Elizabeth Liddle
A raft of what? Elizabeth Liddle
It is also plainly digitally coded. (Binary digital is just the most headlined, we commonly count with decimal digital notation, we tell time with duodecimal and sexagesimal digital systems, hexadecimal digital systems are often used in machine language work with controllers or computers, etc etc. Even our alphanumeric symbol system used for computer based type is a digital system. So is music notation.)
Why is this anything more than an argument by analogy? What other system could be used to encode heritable information at this scale that could not be regarded as discrete - that is the proper word you should be using - DNA is a discrete encoding, not a continuous one. If a continuous encoding was present would that count against ID, or would the argument by analogy just switch to magnetic tape and vinyl? DrBot
GB: Pardon, but this is simply a strident way of saying, I don't like the facts in evidence. Let's review a few key points relative to my description, based on the past 60 or so years of molecular biology:

1 --> DNA is at the heart of cell based life.
2 --> DNA uses a 4-state, discrete state, string data structure with various specific codes.
3 --> The codes for making proteins in particular have in them: START (and put in Meth), elongate with type-x AA, elongate . . . , STOP codons.
4 --> This is functionally specific, prescriptive information.
5 --> It is also plainly digitally coded. (Binary digital is just the most headlined; we commonly count with decimal digital notation, we tell time with duodecimal and sexagesimal digital systems, hexadecimal digital systems are often used in machine language work with controllers or computers, etc. Even our alphanumeric symbol system used for computer-based type is a digital system. So is music notation.)

So, the attempt at dismissal backfires. I DESCRIBED the facts. Now, let's ask the proverbial astute onlooker why you are so patently uncomfortable with the information age facts that sit in the heart of the living cell. GEM of TKI kairosfocus
kairosfocus
And, as a pre-information-age theory in an information age in which it has been discovered that digitally coded, functionally specific prescriptive information is at the heart of cell based life, that fall is coming.
Cool! Yet another variation of the undefined meaningless buzzterm! "digital coded, functionally specific prescriptive information" dCFSPI! These multiple acronyms remind me of a wanna-be pretentious restaurant whose menu says "free range organically grown pesticide free hand selected and gently allowed to expire in a coop with soothing music poultry" ...so it can charge you $45 for a single piece of fried chicken. GinoB
Try the recent post & exchanges here for a raft of them. kairosfocus
Gil: The irony is, that the proper foundations of scientific warrant lie in philosophy. Specifically, in the epistemology of inference to best explanation as applied to scientific issues:
Science – “knowledge” in Latin – is today’s dominant contender for the title: “provider of reliable (or at least probable and credible) knowledge,” and it has a great inherent plausibility because Scientific methods are often glorified common sense: sophisticated extensions to how we learn from day to day experience. But, while such methods and their findings have a proven track record of success that has positively transformed our world, there are in fact many limitations to scientific knowledge claims. A little deeper glance at Charles Sanders Peirce’s Logic of Abduction (also cf. here and here or even here) concept rapidly shows why:
1. Observations of the natural (or human) world produce facts, F1, F2, . . . Fn; some of which may seem strange, contradictory or puzzling.
2. However, if a proposed law, model or theory, E, is assumed, the facts follow as a matter of course: E is a scientific explanation of F1, F2, . . . Fn. [This step is ABDUCTION. E explains the facts, and the facts provide empirical support for E. In general, though, many E's are possible for a given situation. So, we then use pruning rules, e.g. Occam's Razor: prefer the simplest hypothesis consistent with the material facts. But in the end, the goal/value is that we should aim to select/infer the best (current) explanation, by using comparative tests derived from the three key worldview tests: explanatory scope, coherence and power.]
3. E may also predict further (sometimes surprising) observations, P1, P2, . . . Pm. This would be done through deducing implications for as yet unobserved situations. [This step, obviously, uses logical DEDUCTION.]
4. If these predictions are tested and are in fact observed, E is confirmed, and may eventually be accepted by the Scientific community as a generally applicable law or theory. [This step is one of logical INDUCTION, inferring from particular instances to -- in the typical case, more general -- conclusions that the instances make "more probable."]
5. In many cases, some longstanding or newly discovered observations may defy explanation, and sometimes this triggers a crisis that may lead to a scientific revolution; similar to Thomas Kuhn's paradigm shift.
6. Thus, scientific knowledge claims are in principle always provisional: subject to correction/change in light of new evidence and analysis.
7. But also, even when observations are accurately covered/predicted by the explanation, the logic involved has limitations: E => O, the set of current and predicted observations[2], does not entail that if O is seen then E follows: "If Tom is a cat then Tom is an animal" does not entail "Tom is an animal, so he must be a cat."[3]
In short, scientific knowledge claims, at best, are provisional; though they are usually pretty well tested and have across time helped us make considerable technological, health and economic progress.
As I have recently clipped from Newton in his Opticks, Query 31 (1704) he knew this instinctively and intuitively, 300 years ago:
As in Mathematicks, so in Natural Philosophy, the Investigation of difficult Things by the Method of Analysis, ought ever to precede the Method of Composition. This Analysis consists in making Experiments and Observations, and in drawing general Conclusions from them by Induction, and admitting of no Objections against the Conclusions, but such as are taken from Experiments, or other certain Truths. For Hypotheses are not to be regarded in experimental Philosophy. And although the arguing from Experiments and Observations by Induction be no Demonstration of general Conclusions; yet it is the best way of arguing which the Nature of Things admits of, and may be looked upon as so much the stronger, by how much the Induction is more general. And if no Exception occur from Phaenomena, the Conclusion may be pronounced generally. But if at any time afterwards any Exception shall occur from Experiments, it may then begin to be pronounced with such Exceptions as occur. By this way of Analysis we may proceed from Compounds to Ingredients, and from Motions to the Forces producing them; and in general, from Effects to their Causes, and from particular Causes to more general ones, till the Argument end in the most general. This is the Method of Analysis: And the Synthesis consists in assuming the Causes discover'd, and establish'd as Principles, and by them explaining the Phaenomena proceeding from them, and proving the Explanations.
Where too many of Darwin's champions go off the rails is that they confuse an explanatory model of the deep, unobserved past with a "fact" -- even trying to compare [macro-]evolution favourably to gravity or gravitation. (It has been aptly highlighted that a better comparison would be theories of the origin rather than the current operation of the solar system. Actually, there are no such generally accepted theories, just models with one degree or other of difficulties. The raft of exo-planets being discovered in recent years also appears to be making such models ever more difficult; e.g. the latest idea that maybe there was a fifth gas giant that happily managed to stabilise the rest of the system when it got kicked out. Unobserved, and suggested as a way to make some difficulties less intractable; and it is acknowledged that this is not unquestionable fact, just modelling of the suggested deep past of origins for our system.) Secondly, since Darwinian theory has become the origins myth of today's quasi-religion, "scientific" materialism, it has been imbued with an aura of invincibility that will make the fall thereof a terrible sight to see. And, as a pre-information-age theory in an information age in which it has been discovered that digitally coded, functionally specific prescriptive information is at the heart of cell based life, that fall is coming. GEM of TKI
When I was in a psychology lecture once, we were each given a questionnaire. On it were a series of statements for which we had to guess "True or False?" Against each statement was an argument indicating why the statement was obviously true or obviously false. One I remember was: "After a natural disaster, the situation is worsened by the fact that people are less capable than usual of organising themselves. This is not surprising, as natural disasters are deeply traumatic events". At the end the lecturer asked us to say what percentage we'd answered "True". Most of us said "80%". Then he revealed that while we all had the same set of statements, the "obvious" arguments were different in one half than in the other. And we'd all been suckered by the argument that the answer was "obvious". It was a very salutary experience. What seems obvious ain't necessarily so. (BTW, the statement above turns out to be false. People are usually very good at organising themselves after natural disasters. After all, they are well-motivated to do so :)) Elizabeth Liddle
I find this Darwinism stuff to be a desperate attempt to deny the obvious
But the truth often isn't "obvious". That's why the scientific method has been so successful: It gives us a means of determining the truth without relying on what seems "obvious" or on "common sense". NormO
But Darwinists are unwilling to acknowledge their ignorance concerning how this all came about, and persist in presenting unsupported speculation in the name of science.
Could you give an example of an unsupported speculation that a Darwinist has presented in the name of science? And while I agree that we should not say "science has discovered...", that applies just as much to common descent, or the speed of light, or the theory of gravity, or even the germ theory of disease. All scientific conclusions are provisional. That doesn't mean that we can't, for practical purposes, assume they are pretty much true.
Based upon what I’ve learned over my 60 years of existence — mathematics, chemistry, physics, music and language study, computer programming, AI research, and involvement in multiple engineering disciplines — I find this Darwinism stuff to be a desperate attempt to deny the obvious: design and purpose in the universe and human existence.
Well, you are a year or two ahead of me, Gil (60 next birthday!), but I can muster a comparable learning history: music, psychology, design, structural engineering, computer programming, computational modelling and neuroscience - and I find "this Darwinism stuff" to be a pretty compelling fit of model to data, and in no way do I find it "denies the obvious" - clearly many living organisms have plenty of purpose, including humans, and for some of us, that purpose includes making music :) But science remains always provisional. Not only are scientists willing "to acknowledge their ignorance", that acknowledgement is intrinsic to the domain and its methodology. And at a practical level, scientists literally thrive on what we don't know - how else would we persuade anyone to fund a research project? Elizabeth Liddle
