Uncommon Descent | Serving The Intelligent Design Community

Darwinists are Delegitimizing Science in the Name of Science


What Darwinists don’t recognize is that, in the name of promoting science, they are actually promoting skepticism about what can be trusted in the name of science.

Bears evolved into whales? No, that’s been rejected. “Scientists” suggest that whales might have evolved from a cat-like animal, or a hyena-like animal, or (fill in the blank).

“It is thought by some that…”

This is “science”?

Evolution is a fact, if evolution is defined as the observation that some living systems are not now as they once were. According to this definition, I count myself an evolutionist.

But Darwinists are unwilling to acknowledge their ignorance concerning how this all came about, and persist in presenting unsupported speculation in the name of science.

This is ultimately destructive of the scientific enterprise. When people read claims such as "science has discovered…" or "scientific consensus assures us that…", they are likely to assume they are being conned, even when they are not, because they have been burned by so many past claims that turned out to be transparently false or were eventually invalidated by evidence.

Based upon what I’ve learned over my 60 years of existence — mathematics, chemistry, physics, music and language study, computer programming, AI research, and involvement in multiple engineering disciplines — I find this Darwinism stuff to be a desperate attempt to deny the obvious: design and purpose in the universe and human existence.

The irony is that Darwinists are doing much harm to that which they presume to promote — confidence in claims made in the name of science.

Comments
The Digital Code of DNA - 2003 - Leroy Hood & David Galas Excerpt: The discovery of the structure of DNA transformed biology profoundly, catalysing the sequencing of the human genome and engendering a new view of biology as an information science. http://www.nature.com/nature/journal/v421/n6921/full/nature01410.html
bornagain77
October 27, 2011 at 06:31 AM PDT
gpuccio
The genetic code is not base 4? Protein coding genes code for proteins, you know. The information is coded in base 4. Why do you deny that simple concept?
No. The human-designed system to represent the complicated chemical reactions of life is base 4. Digitizing an analog signal to record it doesn't magically make the original signal digital. Pretty much everyone on the planet understands that the map is not the territory, except ID supporters it seems.
GinoB
October 27, 2011 at 06:24 AM PDT
A search that is not intelligently directed is by definition blind.
In that case your argument is completely circular.
...the issue is not to adapt an existing solution to something close by in config space, but to get to the original solution, i.e to the shores of the island of function.
Why is "the issue...not to adapt an existing solution to something close in config space"? It seems to me that's exactly what the issue is. What do you mean by "the original solution"? If you mean the simplest possible self-replicator, then, sure, but clearly Darwinian evolution is not invoked to account for the necessary conditions for Darwinian evolution! But once started, Darwinian evolution is a perfectly good method for finding novel solutions, which is why we actually use them.
Elizabeth Liddle
October 27, 2011 at 05:21 AM PDT
ES: Prezactly.
kairosfocus
October 27, 2011 at 05:13 AM PDT
Dr Liddle: A search that is not intelligently directed is by definition blind. And the point where you speak of searching within an island of existing function is exactly the key point highlighted by design theory: the issue is not to adapt an existing solution to something close by in config space, but to get to the original solution, i.e. to the shores of the island of function. Until you find initial function, if you are not intelligently configuring elements towards intelligently identified forms and arrangements that will credibly work, you are forced to undertake blind search without feedback across the vast majority of the config space.

This starts first of all in the warm little pond or equivalent, and pardon, but until one shows an OBSERVED path to metabolism plus symbolic replication, then one has nothing. The tree of life icon that so rules the minds of many has no root. Then, when one has first got some function, one needs to show how incremental stepwise changes, that must all be functional, can move from an initial body plan to more complex ones with specialised organs etc. And this has to be embryologically feasible, where there are multiple complex components that must all fit and work together in collaboration the FIRST time, or the body plan fails to develop from the initial zygote or equivalent. That characteristic wiring-diagram integrated complexity is the direct reason to see islands and archipelagos of function in large seas of non-functional configs.

Something like the bird wing, feathers and controls plus power systems plus specialised lungs highlighted by the co-founder of modern evolutionary theorising, Wallace, in support of his intelligent evolution view, is a good illustrative example. The co-adaptations needed to change a bear or a cat or a hippo etc. into a whale are similar. The eyes are a similar case. And more. Neither root nor main branches of the so often portrayed tree of life are to be seen in the fossil record, nor in the lab today. The evidence points strongly to islands of function, regardless of the demand of the theory established by a priori materialism, that there MUST be smoothly graded pathways from the root to the branching body plans. (Notice the issue of the trade secret of paleontology.)

Those islands, from the DNA we observe, require about 100,000 - 1 mn bits of info for the first cell plan, and onward 10 - 100 million plus for the main body plans for multicellular organisms. That there is a smoothly graded path from unicellular to multicellular organisms on the various body plans, or that the blind -- non-intelligent -- searches of relevant config spaces required to effect this can be done without intelligence (the ONLY observed source of FSCI), is an a priori demand of a theory accepted as the implication of a worldview, not something that has been warranted empirically. That is why Johnson in reply to Lewontin et al is so stinging:
For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose." . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
Pardon, finally, but this sort of over and over again definitionitis game gets old very fast. GEM of TKI
kairosfocus
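For scale, here is a minimal Python sketch of the configuration-space sizes implied by the bit figures quoted in the comment above; the bit counts are the comment's own low-end figures, and the arithmetic is simply 2^n expressed as a power of ten:

from math import log10

# Low-end bit figures quoted in the comment above.
FIGURES = {
    "first cell plan": 100_000,
    "multicellular body plan": 10_000_000,
}

for label, bits in FIGURES.items():
    digits = bits * log10(2)  # log10(2**bits) without building the huge integer
    print(f"{label}: 2^{bits} ~ 10^{digits:.0f} configurations")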
October 27, 2011 at 05:11 AM PDT
Comments like 7.1.1.1.2 about blind search being good stem from a lack of understanding of the intractability of life and of biofunction not being defined over large areas of configuration spaces. Blind 'trial and error' search is no good for anything of real engineering importance.
Eugene S
October 27, 2011 at 05:05 AM PDT
Petrushka: Why should it be necessary to move away from "the materials and forces of nature" that are being used to implement the digital system, to show that it is a digital system? ALL digital systems -- all engineered systems -- are implemented using "the materials and forces of nature." ABET:
ENGINEERING is the profession in which a knowledge of the mathematical and natural sciences gained by study, experience, and practice is applied with judgment to develop ways to utilize economically the materials and forces of nature for the benefit of mankind.
Why not work through the response to Dr Liddle just above, and in particular focus on the way that tRNA, the crucial specifying element in the AA chaining system, works? Notice the standard CCA coupler on the end opposite to the anticodon? If you object to the use of key-lock fitting and bumps/drops in a digital code [codon-anticodon fit], please observe that Braille, used with blind people, is exactly a 6-bit digital code based on bump/no bump in a physically organised array in a physical medium such as paper. And how hard this is to do, joined to how fruitless a trial and error search approach would be, is a clue to how elegantly knowledgeable and skilled the design is. Do you think that illiterate primitives would be impressed if they were to come across a motherboard, maybe with the chip covering blown off, and see the square of silicon and the plastic, wires and solder? Surely, such a complex thing is very hard to do, and so how could someone do it apart from trial and error that would most likely fail? Of course the answer is trillions invested across centuries to develop and build up, then propagate, the science and create the cluster of required industries. Now, move this all to the next level where we are dealing with sophisticated molecular nanotech. Does that help you see more clearly? GEM of TKI
kairosfocus
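The Braille point above lends itself to a short illustration. A minimal Python sketch, assuming the standard English Braille dot patterns for 'a', 'b' and 'c' (treat the specific patterns as illustrative): each cell is one of 2^6 = 64 discrete bump/no-bump states.

# Minimal sketch: a Braille cell has 6 dot positions, so each character
# is one of 2**6 = 64 discrete (digital) states: bump / no bump.
BRAILLE_DOTS = {
    "a": {1},       # dot 1 raised
    "b": {1, 2},    # dots 1 and 2 raised
    "c": {1, 4},    # dots 1 and 4 raised
}

def cell_to_bits(dots):
    """Pack a set of raised dot positions (1..6) into a 6-bit integer."""
    return sum(1 << (pos - 1) for pos in dots)

for letter, dots in BRAILLE_DOTS.items():
    print(letter, format(cell_to_bits(dots), "06b"))
print("distinct cells possible:", 2 ** 6)  # 64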
October 27, 2011 at 04:52 AM PDT
Blind search is far less powerful than heuristically (intelligently) guided search. You can think of blind search as a heuristic search with a very poor heuristic (something like a zero-knowledge default). I am not saying that in all lab experiments they must be using heuristic guidance but that is a possibility.
What do you mean by "blind search"? Evolution is not "blind search" in a very important sense - it "searches" where it has already "found" solutions. It does not "blindly" grope anywhere in the solution space at random. That's why it's such a good search method for fitness landscapes in which good solutions are clustered in neighbourhoods, as is the case in biology (and in many engineering contexts too).
Elizabeth Liddle
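To make the contrast in this exchange concrete, here is a minimal Python sketch comparing a purely blind sample of a toy fitness landscape with a search that only tries small variations of its current best; the landscape, step size and trial counts are illustrative assumptions, not anything taken from the thread:

import random

def fitness(x):
    """Toy fitness landscape: a single smooth peak at x = 50."""
    return -abs(x - 50)

def blind_search(trials=1000):
    # Sample anywhere in the space at random and keep the best sample seen.
    return max((random.uniform(0, 1000) for _ in range(trials)), key=fitness)

def local_search(trials=1000, step=5.0):
    # Start anywhere, but only try small variations of the current best.
    best = random.uniform(0, 1000)
    for _ in range(trials):
        candidate = best + random.uniform(-step, step)
        if fitness(candidate) > fitness(best):
            best = candidate
    return best

random.seed(0)
print("blind search best fitness:", round(fitness(blind_search()), 2))
print("local search best fitness:", round(fitness(local_search()), 2))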
October 27, 2011 at 04:37 AM PDT
kf, of course I am not denying the genetic code! I'm saying that "base 4 digital" is an extremely misleading description of that code. I don't care what wiki says - I have presented my reasoning. If you have problems with my reasoning, please articulate what the problem is. I also do not accept that the mapping of 64 triplet base-pair sequences to 20 amino acids is "SYMBOLIC" not "chemical". What's not chemical about it?
Elizabeth Liddle
October 27, 2011 at 04:31 AM PDT
Dr Liddle: That DNA is a physical implementation of digital, discrete state, base-4 coding is not in serious doubt. Let's clip Wiki speaking against interest:
The genetic code is the set of rules by which information encoded in genetic material (DNA or mRNA sequences) is translated into proteins (amino acid sequences) by living cells. The code defines how sequences of three nucleotides, called codons, specify which amino acid will be added next during protein synthesis. With some exceptions,[1] a three-nucleotide codon in a nucleic acid sequence specifies a single amino acid. Because the vast majority of genes are encoded with exactly the same code (see the RNA codon table), this particular code is often referred to as the canonical or standard genetic code, or simply the genetic code, though in fact there are many variant codes. For example, protein synthesis in human mitochondria relies on a genetic code that differs from the standard genetic code. Not all genetic information is stored using the genetic code. All organisms' DNA contains regulatory sequences, intergenic segments, chromosomal structural areas, and other non-coding DNA that can contribute greatly to phenotype. Those elements operate under sets of rules that are distinct from the codon-to-amino acid paradigm underlying the genetic code . . . . After the structure of DNA was discovered by James Watson and Francis Crick, who used the experimental evidence of Maurice Wilkins and Rosalind Franklin (among others), serious efforts to understand the nature of the encoding of proteins began. George Gamow [--> the Russian-American Astronomer] postulated that a three-letter code must be employed to encode the 20 standard amino acids used by living cells to encode proteins. With four different nucleotides, a code of 2 nucleotides could only code for a maximum of 4^2 or 16 amino acids. A code of 3 nucleotides could code for a maximum of 4^3 or 64 amino acids.[2] The fact that codons consist of three DNA bases was first demonstrated in the Crick, Brenner et al. experiment. The first elucidation of a codon was done by Marshall Nirenberg and Heinrich J. Matthaei in 1961 at the National Institutes of Health. They used a cell-free system to translate a poly-uracil RNA sequence (i.e., UUUUU...) and discovered that the polypeptide that they had synthesized consisted of only the amino acid phenylalanine. They thereby deduced that the codon UUU specified the amino acid phenylalanine. This was followed by experiments in the laboratory of Severo Ochoa demonstrating that the poly-adenine RNA sequence (AAAAA...) coded for the polypeptide poly-lysine[3] and that the poly-cytosine RNA sequence (CCCCC...) coded for the polypeptide poly-proline.[4] Therefore the codon AAA specified the amino acid lysine, and the codon CCC specified the amino acid proline. Using different copolymers most of the remaining codons were then determined. Extending this work, Nirenberg and Philip Leder revealed the triplet nature of the genetic code and allowed the codons of the standard genetic code to be deciphered. In these experiments, various combinations of mRNA were passed through a filter that contained ribosomes, the components of cells that translate RNA into protein. Unique triplets promoted the binding of specific tRNAs to the ribosome. Leder and Nirenberg were able to determine the sequences of 54 out of 64 codons in their experiments.[5] Subsequent work by Har Gobind Khorana identified the rest of the genetic code. Shortly after, Robert W. Holley determined the structure of transfer RNA (tRNA), the adapter molecule that facilitates the process of translating RNA into protein. 
This work was based upon earlier studies by Severo Ochoa, who received the Nobel prize in 1959 for his work on the enzymology of RNA synthesis.[6] In 1968, Khorana, Holley and Nirenberg received the Nobel Prize in Physiology or Medicine for their work.[7] . . . . The genome of an organism is inscribed in DNA, or, in the case of some viruses, RNA. The portion of the genome that codes for a protein or an RNA is called a gene. Those genes that code for proteins are composed of tri-nucleotide units called codons, each coding for a single amino acid. Each nucleotide sub-unit consists of a phosphate, a deoxyribose sugar [--> the sugar-phosphate chaining backbone], and one of the four nitrogenous nucleobases [--> the info storing "side-branch"] . . . . Each protein-coding gene is transcribed into a molecule of the related polymer RNA. In prokaryotes, this RNA functions as messenger RNA or mRNA; in eukaryotes, the transcript needs to be processed to produce a mature mRNA. The mRNA is, in turn, translated on the ribosome into an amino acid chain or polypeptide.[8]:Chp 12 The process of translation requires transfer RNAs specific for individual amino acids with the amino acids covalently attached to them, guanosine triphosphate as an energy source, and a number of translation factors. tRNAs have anticodons complementary to the codons in mRNA and can be "charged" covalently with amino acids at their 3' terminal CCA ends. Individual tRNAs are charged with specific amino acids by enzymes known as aminoacyl tRNA synthetases, which have high specificity for both their cognate amino acids and tRNAs. The high specificity of these enzymes is a major reason why the fidelity of protein translation is maintained.[8]:464–469 There are 4^³ = 64 different codon combinations possible with a triplet codon of three nucleotides; all 64 codons are assigned for either amino acids or stop signals during translation. If, for example, an RNA sequence UUUAAACCC is considered and the reading frame starts with the first U (by convention, 5' to 3'), there are three codons, namely, UUU, AAA, and CCC, each of which specifies one amino acid. This RNA sequence will be translated into an amino acid sequence, three amino acids long.[8]:521–539 A given amino acid may be encoded by between one and six different codon sequences. A comparison may be made with computer science, where the codon is similar to a word, which is the standard "chunk" for handling data (like one amino acid of a protein), and a nucleotide is similar to a bit, in that it is the smallest unit . . .
Notice, this work was rewarded with a Nobel Prize over forty years ago. That is how long ago it was not only no longer a matter of significant dispute, but a celebrated achievement of science, that DNA was understood to be an informational macromolecule, carrying coded information. Observe how it was PREDICTED that, to code for 20 AA's, you would need a triplet coding scheme, as 4^3 = 64, whilst 4^2 = 16. DNA as a code-based linear macromolecule acting as a physical basis for a string data structure with 4-state digital elements is not a matter of serious dispute. What is therefore significant is why there are at UD those who so hotly dispute this long since well documented and commonly accepted reality. It cannot be for want of familiarity with basic facts, as these are easily accessible and have been taught in schools from grade or secondary level for decades, as well as being all over the media and Internet. The answer, to be direct, is plainly ideological, as the strong, sharply reactive objection to easily confirmed terms like "digital" and "code" -- notice Dr Bot's preference above to substitute the less familiar term, "discrete" -- reflects. Digital MEANS discrete state, as Wiki also conveniently documents [I confess, I here feel like one having to "prove" what "ABC . . ." means to a literate person]:
A digital system[1] is a data technology that uses discrete (discontinuous) values. By contrast, analog (non-digital) systems use a continuous range of values to represent information. Although digital representations are discrete, they can be used to carry either discrete information, such as numbers, letters or other individual symbols, or approximations of continuous information, such as sounds, images, and other measurements of continuous systems. The word digital comes from the same source as the word digit and digitus (the Latin word for finger), as fingers are used for discrete counting. It is most commonly used in computing and electronics, especially where real-world information is converted to a digital format as in digital audio and digital photography . . .
Similarly, Wiki tells us that:
A code is a rule for converting a piece of information (for example, a letter, word, phrase, or gesture) into another form or representation (one sign into another sign), not necessarily of the same type. In communications and information processing, encoding is the process by which information from a source is converted into symbols to be communicated. Decoding is the reverse process, converting these code symbols back into information understandable by a receiver. One reason for coding is to enable communication in places where ordinary spoken or written language is difficult or impossible. For example, semaphore, where the configuration of flags held by a signaller or the arms of a semaphore tower encodes parts of the message, typically individual letters and numbers. Another person standing a great distance away can interpret the flags and reproduce the words sent.
If it were not so sad, I would be amused by the objection in your last paragraph:
please do not belittle the perfectly good argument that DNA is not “digital base 4” in any useful sense. My view is that it’s a useless model, because the bases are not switched. The system is alphabetic not digital (which should be equally “worrying” to me, on your logic, but isn’t, and therefore undermines the argument that we are running scared from the implications of “digital base 4”.)
The alphanumeric system used to communicate in written English, FYI, is precisely a digital system of discrete glyphs, one that is then translated into one of several common binary digital codes, e.g. the 7- or 8-bit [parity check] ASCII code. The chain of symbols in DNA, FYFI, can be reframed to code for different proteins, and this has apparently been observed. That is, the symbols may function differently by changing the framing -- a high art of machine code design that I have never even TRIED to do, as I came along in the days when we had big enough EPROMs. Thank God for the good old 2716!

The evidence is more than compelling, to all save those who are ideologically committed otherwise. DNA is an informational macromolecule used in the heart of the cell that physically instantiates a base-4, digital, discrete state string data structure containing prescriptive information, especially protein codes and regulatory information. As noted with reference to the charging of tRNAs, the system is also highly specific, where the COOH end of the AA is locked to a standard tool-tip on the tRNA, based on its configuration. (It is chemically possible to force a false charging of any tRNA because of that universal coupler system.) The AA-carrier tool tip and the codon-matching anticodon are at opposite ends of the tRNA, and so we see how the transfer from RNA to emerging protein -- a translation process that is also diagnostic of a code in action: this is a mapping from a 64-state system to a 20-state one [with some key exceptions] -- is SYMBOLIC, not a matter of blind chemical forces. And BTW, the system is subject to reprogramming, and I gather experimenters have recently reconfigured it to code for different sequencings. So, the matter is plain. GEM of TKI
kairosfocus
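The triplet mapping described in the quoted material above can be sketched in a few lines of Python; the three codon assignments used here (UUU, AAA, CCC) are exactly the ones attributed to the Nirenberg and Ochoa experiments in that quote, and the remaining 61 entries of the standard table are omitted:

# Partial codon table: just the three assignments cited in the quote above.
CODON_TABLE = {
    "UUU": "Phe",  # phenylalanine (Nirenberg & Matthaei, poly-U experiment)
    "AAA": "Lys",  # lysine (poly-A)
    "CCC": "Pro",  # proline (poly-C)
}

def translate(mrna):
    """Read an mRNA string three bases at a time and look up each codon."""
    codons = [mrna[i:i + 3] for i in range(0, len(mrna) - len(mrna) % 3, 3)]
    return [CODON_TABLE.get(c, "?") for c in codons]

print(translate("UUUAAACCC"))       # ['Phe', 'Lys', 'Pro']
print("possible codons:", 4 ** 3)   # 64 triplets over a 4-letter alphabet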
October 27, 2011 at 04:25 AM PDT
Petrushka, I see the point and almost agree with you. The problem that you seem to be skipping over is that of initial conditions. I don't know any details about how they usually conduct searches for biofunction in the lab. But here is what I think. 1. The experiments for reasons of intractability must necessarily start close enough to something functional. 2. Blind search is far less powerful than heuristically (intelligently) guided search. You can think of blind search as a heuristic search with a very poor heuristic (something like a zero-knowledge default). I am not saying that in all lab experiments they must be using heuristic guidance but that is a possibility. So it is not exactly evolution in the lab. Call it directed (micro)-evolution or intelligent parameter setting. The point is where we start searching for function. I am an engineer and what KF and GP are saying makes a lot of sense to me. While I have no problem in admitting evolution as a possibility in principle (for things like adaptation), the inherent monumental intractability of life is a serious consideration against neo-Darwinism on a grand scale.
Eugene S
October 27, 2011 at 04:25 AM PDT
No, gpuccio. Clearly hexadecimal numbers are digital, incorporate place value, and any digit can be switched between one of sixteen states. Polynucleotide base-pair sequences have no "place value", and "switching" a base pair to take a different state (if we call replacing it in a copy "switching" it) is only one of many ways in which the sequence is modified - insertion, deletion and duplication are just as important. Moreover, polynucleotide base-pair replacements are not part (that I know of) of healthy organismic function, but rather something that happens during reproduction. During the life of an organism, we hope that our DNA molecules stay pretty much the same as the one we started off with. They don't always, of course, which is why we get cancer. What does happen, however, during the life of an organism, indeed repeatedly as I am typing this, is that genes are switched between "off" and "on" states. In that sense they are "digital", but binary, and still not a system with place-value. On the other hand DNA is quite like an alphabetic system in that it can be parsed into three-part "letters" (each consisting of a base pair triplet) which in turn form combinations that "spell" a specific protein, in something of the way that Roman letters "spell" a specific word. However, even here the analogy breaks down, because whereas a DNA sequence, under certain conditions, triggers a sequence of chemical processes that result in the synthesis of a physical object (the coded protein), the letter sequence "JUSTICE" or even "CONCRETE" merely evokes in a pair of people the same shared concept, and neither justice nor concrete are synthesised by the writing of the word. That is certainly not to say that information is not an important concept when considering genetics, but to say "heere bee Dragones" - or rather, acute risk of inadvertent equivocation!
Elizabeth Liddle
October 27, 2011 at 04:24 AM PDT
material.infantacy: I suppose it is because, if they admit the concept (and indeed, many researchers now do exactly that), they are in a mess with their theory. But the concept is strong, beautiful and simple. You define a function. You compute the minimum number of bits necessary to express that function. You evaluate whether the random system you are considering could reasonably produce that result alone, or whether there are explicit necessity algorithms that can do the same, either alone or associated with the random system. It is not so difficult. (Mind, all who read: this is not the rigorous definition, just a generic description.)
gpuccio
October 27, 2011 at 04:05 AM PDT
Elizabeth: "The system is alphabetic not digital" !!!!! Are you serious? Then hexadecimal numbers are bastards?
gpuccio
October 27, 2011 at 03:52 AM PDT
Elizabeth: What do you mean? Sometimes I really can't understand what you mean!!! The genetic code is not base 4? Protein-coding genes code for proteins, you know. The information is coded in base 4. Why do you deny that simple concept?
gpuccio
October 27, 2011 at 03:51 AM PDT
KF: You are definitely better than me at that! :)
gpuccio
October 27, 2011 at 03:48 AM PDT
Dr Liddle: Let's ask: have you ever done coding at machine or near-machine level (i.e. assembly)? Do you have practical knowledge of the difference between code adapted to an abstract info-processing process, and code adapted to the specifics of machine implementation? I do. Machine code is the term given to object code that is specifically adapted to the details of a given machine. Source code, by contrast, is abstracted from those specifics. The code we see in DNA is precisely adapted to the details of the machine, and is an example of such. Cf the process of protein synthesis to see why I say that. So, no, I am not just making empty wishful assertions that are suspect and need to be proved. I am speaking as an experienced designer and programmer at assembly/machine levels. The idea of a DNA compiler would be where there is an analogy, if anything, as we have certainly found it VERY advantageous to use abstracted languages adapted to the needs of the problem. And, somewhere down the line we WILL develop a DNA compiler -- and a DNA decompiler to move from object code to some version of a source code. That, BTW, is part of what I have in mind when I speak of how Venter has given us proof of concept of intelligent design of organisms, and that we need to move some number of generations down the road. Where, a molecular nanotech lab several generations beyond Venter could design and implement C-chemistry, cell-based life, as a sufficient cause. GEM of TKI
kairosfocus
October 27, 2011 at 03:43 AM PDT
Nope. What I mean is that sequences cannot be translated into folds except by doing the chemistry. One can emulate the chemistry (as in Folding@Home), but this is monumentally difficult and there appear to be no shortcuts. The problem for a designer is one of knowledge. How does a designer accumulate the knowledge to know what sequences are functional, when the possible combinations of a single gene exceed the number of particles in the universe? Where is the knowledge stored? How is it accessed? Most engineering problems can be fixed by research and development. Rockets were invented at least a thousand years before they became practical for transportation. But the problem of cellular automata and (I think) the analogous problem of protein folding appear to be mathematically intractable. What human engineers do when experimenting with novel sequences is generate lots of sequences and select those that produce desirable folds. From those, a tiny subset may have some minimal function. So what is being done is evolution in the laboratory. The problem from a design standpoint is that nature has far greater resources for generating and testing novel sequences. Now either functional space is such that it can be traversed incrementally, or it isn't. If it isn't, then it is inaccessible to designers as well as to evolution.
Petrushka
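A back-of-the-envelope Python sketch of the combinatorial point made above; the 300-residue protein length and the 10^80 particle count are illustrative round figures, not numbers taken from the comment:

from math import log10

AMINO_ACIDS = 20
PROTEIN_LENGTH = 300               # illustrative round figure for one protein
PARTICLES_LOG10 = 80               # commonly cited rough estimate, as a power of ten

# log10(20**300), computed without building the huge integer itself.
sequence_space_log10 = PROTEIN_LENGTH * log10(AMINO_ACIDS)
print(f"sequence space ~ 10^{sequence_space_log10:.0f}")   # ~10^390
print(f"particles      ~ 10^{PARTICLES_LOG10}")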
October 27, 2011 at 03:30 AM PDT
Interpretations are bound to be contextual. What is meaningful in one context may not be meaningful in others. Is that what you mean by "independently from the processes of chemistry"?
Eugene S
October 27, 2011 at 03:07 AM PDT
I have that impression as well. If I am not mistaken, Kauffman hypothesises, e.g., that some sort of life was bound to emerge sooner or later in our universe. As far as I understand what he says, he believes it is possible via total co-evolution.
Eugene S
October 27, 2011 at 02:51 AM PDT
The code physically instantiated in DNA is machine code, object code.
Please support this assertion.
Elizabeth Liddle
October 27, 2011 at 01:57 AM PDT
At some point our darwinist interlocutors, or at least some of them, seem to become more or less aware that some of the things ID says are worrying.
No, gpuccio, this is not the case. There is nothing intrinsically "worrying" about the idea that DNA is a "code" or that it is "digital". As I've said, in some respects DNA does act as a "digital" "code", only it is in binary not base 4 - genes are switched between "off" and "on" states. Nobody "worries" about this - the discovery was made by "darwinists" and forms a major plank in modern evolutionary biology, specifically the branch called "evo devo". So please do not belittle the perfectly good argument that DNA is not "digital base 4" in any useful sense. My view is that it's a useless model, because the bases are not switched. The system is alphabetic not digital (which should be equally "worrying" to me, on your logic, but isn't, and therefore undermines the argument that we are running scared from the implications of "digital base 4".)
Elizabeth Liddle
October 27, 2011 at 01:54 AM PDT
If this is not evidence of design, nothing is. This is not evidence of design. Ergo... !
Chas D
October 27, 2011 at 01:41 AM PDT
The main problem with design is not whether DNA is analog or digital, but whether, in principle, a DNA sequence can be interpreted independently from the processes of chemistry. I would like to see a design advocate demonstrate (even as a thought experiment) that one can anticipate the biological implementation of a coding or regulatory sequence without using trial and selection. I'm thinking some things in life resemble cellular automata in that one cannot see the results of even a simple code except by running it.
Petrushka
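The cellular-automaton point above can be made concrete with a few lines of Python; Rule 30 is used here only as a familiar example of a simple, fully specified rule whose long-run output is hard to anticipate without actually running it:

RULE = 30  # update rule for an elementary (two-state, nearest-neighbour) CA

def step(cells):
    """Apply the rule to every cell, using its left and right neighbours (wrapping edges)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1  # single live cell in the middle
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)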
October 27, 2011 at 01:06 AM PDT
Hi GP, regarding your f), there seems to be an assumption on the part of some interlocutors that any contingent sequence in DNA is potentially functional, given the right combination of organism and environmental conditions. I could be wrong about this, but it would help explain why there is sometimes a denial (or doubt) that functionally specified information is an objective concept.
material.infantacy
October 27, 2011 at 12:59 AM PDT
"...that your made up, subjective ‘dFSCI’ metric..."
Do you mean that the acronym is made up? Aren’t they all?
'...only intelligently designed things can have large amounts of dFSCI” is a hypothesis,"
It’s an observation, excepting the subject at issue. Do you know of anything empirically established to be the product of necessity, which contains large strings of specified contingency? I trust you know the difference between specified and random contingency, both being uncompressible forms of information.
"...you’re going to have to measure both known designed and known not-designed things."
Computer code is specified and complex. Computer code is designed. Nothing in nature which is explicable by necessity contains specified complexity, or anything representing or analogous to computer code. This is obvious. The informational content of computer code can be assessed, just as the information content of the DNA molecule can. This is inarguable. The specified nature of the information content is inarguable. What's at issue is whether there is an explicable mechanism born out of the laws of nature which can account for it. This is what's at issue, not some idiotic objection to the use of terms like dFSCI. "Biology is the study of complicated things that give the appearance of having been designed for a purpose." [Dawkins, The Blind Watchmaker (1996), p. 1, via Evolution Quotes] Francis Crick writes, "Biologists must constantly keep in mind that what they see was not designed, but rather evolved." [via Detecting Design in the Natural Sciences] Now, we observe digitally coded information (dFSCI, or the digital subset of all examples of FUNCTIONALLY SPECIFIC COMPLEX INFORMATION) in both computer systems and in biological systems. If you don't like the dFSCI label, substitute "specified complexity", which has a history predating its use in ID; consider the subset of complex specified information that is digitally coded for function, and make up your own damn expression. The concept is real, so nobody cares if you take issue with the moniker. There's no evidence you even understand what FSCI entails; no wonder you take issue with dFSCI. It looks like you're new here. gpuccio is not - not by a stretch. You don't appear to even understand what he's talking about, you just shout naked incredulity at whatever he puts forward. Why don't you try addressing his arguments in a real discussion he has with EL, which begins here. That should establish if you have any grasp on what is actually being argued.
material.infantacy
October 27, 2011 at 12:47 AM PDT
Welcome.
kairosfocus
October 27, 2011 at 12:18 AM PDT
GB: The code physically instantiated in DNA is machine code, object code. (I would LOVE to see the DNA compiler! [Hint: it is NOT going to be molecular accidents filtered by trial and error, for the needle-in-a-haystack reasons just discussed.]) GEM of TKI
kairosfocus
October 27, 2011 at 12:17 AM PDT
eep, eep, cheep, cheep! dFSCI is: geqghyeqoeghqutg3itghjbgioer There, proved!
kairosfocus
October 27, 2011 at 12:14 AM PDT
GinoB:
. . . your made up, subjective ‘dFSCI’ metric
Have you done information theory at some point, and/or do you use it in your work? Let's start with basics: following Hartley's suggestion, info is measured since Shannon in 1948 in binary digits, i.e bits (this is where the abbreviation was introduced, sorry I go back to the era in which all of this was jargon for a weird field called telecommunications, and an associated one called digital electronics, with a bleed-over into a more exotic field called thermodynamics for which there is a whole informational approach school of thought that has been a controversial school for decades but is now getting much more respect). In effect -- cf my discussion in my always linked briefing note [through my handle], here and onward -- the number of possibilities for a field of configurations and the reasonable or observed distribution of outcomes was used to measure info: I = - log p, in bits if the log is to base_2. This was extended by Shannon to the case of average info per symbol, H. H is also a bridge to thermodynamics, as is now increasingly recognised. Let us consider some functional part, e.g. a car part, similar to the remarks just made to Dr Bot. It needs to be a fairly specific size, shape, and material etc to work. Work/fail -- as you know from say working with a car -- is a fairly objective matter. In engineering terms, there is a specification, with a certain degree of tolerance, that will be acceptable, and outside that range, the part will not work. There is a zone T from which actual cases E will work, and this is a part of a much wider range of possibilities W, where the overwhelming majority will not work. Most possible lumps of say mild steel of the same size of our part, will NOT work as an acceptable part. The concept of an island of acceptable function in a given context naturally emerges. (And so does the concept of an archipelago of related islands of function, e.g a similar part will work in other engines for different cars, but usually parts are not freely substitutable. Function is context-specific as a rule. Hence also the concept that for a multi-part entity where several well-matched parts have to work together just right to get a particular overall function, each being necessary and the core cluster being jointly sufficient, we have irreducible complexity of the function.) Without loss of generality [WLOG] all of this can be reduced to digital information, by imposing a structured set of yes/no decisions in the context of a mesh of nodes and arcs [which for a multi part system like a car engine, is hierarchical, i.e the nodes of a mesh at one level, can be expanded into meshes in turn, etc, leading to the classical exploded, "wiring" diagram so useful in assembly of a system like that]. In effect that is what a CAD package like AutoCAD does. That structured set of yes/no decisions gives us a natural measure of information in binary digits, or bits. In that context, digitally coded, functionally specific, complex information is quite meaningful. However, there is another context, in which the digital info is directly present. In text like this posted comment, we are using a set of glyphs that form a set of symbols, typically represented as ASCII, 7-bit digital code. A s-t-r-i-n-g of such symbols is also a natural structure, as you just saw. Similarly, for acceptable, intelligible text in say English or Italian, not gibberish, certain rules need to be pretty fairly adhered to. 
Some degree of tolerance may be there for typos and errors of grammar, but not that much, certainly not much compared to the field of possibilities for a string of a given length, where each member of the string may take up 128 possibilities. The number of possibilities for a string of n elements is 128^n for ASCII characters, i.e. things run up very fast indeed. Prescriptive info, i.e. step-by-step instructions for acts to be carried out, is very similar, and is familiar from computer programs, including these days markup for display, e.g. HTML tags like those you see below the box where you type in a comment. Procedural languages extend this to all sorts of things, and that leads to the concept of a bug, whereby we see, again, that there may be a cluster of acceptable configs, but the vast majority of possibilities are not going to work. We are right back to the concept of digitally coded, functionally specific, complex info.

Complexity is obviously a function of the number of possibilities, and can be measured in various ways. A convenient way is to compare the number of possibilities for a given string of bits, usually 500 or 1,000 as threshold, with the number of possible Planck time quantum states [PTQS's] of the atoms of our solar system or the observed cosmos since their reasonable time of formation, or the like. (Cf a recent peer-reviewed discussion here.) In effect, the 10^57 atoms of our solar system, in 10^17 s since formation, would have up to 10^102 PTQS's. This is 1 in 10^48 of the set of possibilities for 500 bits. Or, in familiar terms, you are taking a 1-straw-sized sample from a field of possibilities equivalent to a cubical hay-bale 3 1/2 light-days across, the distance light would travel in that much time at 186,000 miles/s. A whole solar system could be lurking in that bale, and still sampling theory will tell you that you only have a right to expect to get what is TYPICAL, straw, not what is atypical. With 1,000 bits, it is much worse. Millions of universes the size of our observed universe could be lurking in a bale of the resulting size, and a 1-straw sample would even more overwhelmingly be expected to come up straw. There is one known exception to this pattern: where the sample is intelligently directed, i.e. someone knows where to look to get needle not hay.

So, despite the dismissive nonsense and vituperation -- some have come over to UD to toss out assertions like "fake," and I have no doubt that where they do not have to keep a civil tongue in their heads, it is much, much worse -- that you will see out there in the circle of ill-informed but angry attack sites, the basic concept of a metric of when such a search on blind chance and mechanical necessity will be credibly hopeless is very useful and reasonable indeed, for reasons that are very close to the statistical foundations of the second law of thermodynamics. The discussion here shows a way to reduce, simplify and apply Dembski's metric in light of the above, based on several months of discussion here at UD with an earlier attempt to discredit the CSI concept and its metrics. Stating: Chi_500 = I*S - 500, in bits beyond the solar system threshold, where I is an info metric that is relevant, whether I = - log p or the like, or even a direct estimate based on the nature of an inherently digital situation [as Shannon also used], and S is a dummy variable that is 1/0 according as, on reasonable grounds, the object in question is highly specific or may take up any config it pleases.
501 coins tossed at random and arranged in a string will take up any particular value, most likely one near a 50-50 distribution, and will be complex but not specific. 501 coins have 501 bits of info-storing capacity, but under the circumstances S = 0, and so Chi_500 = -500. If the same coins are seen to be arranged in accord with the ASCII code for a statement in English, then that specification on function shifts matters dramatically. S = 1, and here we now see Chi_500 = 1, and the best explanation is the obvious one: design. The just linked discusses several biological cases, based on Durston et al and their recent peer-reviewed work on 35 protein families.

Of course, it is possible to program a computer to do the equivalent of arranging 501 coins by hand, and that is what happens with programs that are often presented as demonstrating how such FSCI can arise by blind chance and mechanical necessity. Nope, as was discussed at length over the period a few months back, GAs START in a defined target zone, with a neatly arranged nice trendy fitness function, and then do some Mt Improbable style hill-climbing to peaks of the function. But, that has begged the question of how you arrived to begin with in such a convenient location, i.e. on an island of function. THAT challenge is what [d]FSCI is about. And the answer is the one we know for all observed GA's to date: the key info was built in by the designers. That is, GA's show the power of design. As the needle-in-the-haystack issue will point out.

BTW, the digital material in the heart of the living cell, DNA, starts out at about 100,000 - 1 mn bits for the simplest actually observed life, and goes up to the billions for the more complex body plans. (If you want to hypothesise about a run-up to such life, please show us empirical cases of the spontaneous emergence of metabolising, self-replicating systems without undue experimenter intervention, from reasonable pre-life environments. We know from the unintended experiment of the canning industry that spontaneous emergence of life in even quite rich prebiotic soups with conveniently homochiral environments is rather unlikely, with many billions of test cases in point. That is, no-one has reported spontaneous emergence of novel life in such a can, after coming on 200 years of canning. And realistic prebiotic environments cannot assume homochirality or that degree of concentration, both of which have exponential effects on making the reactions much more likely.) So, dFSCI is not a suspect concept or metric expression. Just, it gives a message that is not very welcome to the institutionally dominant school of thought on origin of life or of body plans. But then, 200 years ago or so, Wilberforce was a spokesman for a controversial and tiny minority. GEM of TKI
kairosfocus
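A minimal Python sketch of the reduced metric stated above, Chi_500 = I*S - 500, applied to the 501-coin illustration; the 1/0 "specification" flag simply follows the comment's own description and is of course only a toy:

def chi_500(info_bits, specified):
    """Chi_500 = I*S - 500, in bits beyond the 500-bit threshold."""
    S = 1 if specified else 0
    return info_bits * S - 500

# 501 coins tossed at random: 501 bits of storage capacity, but no
# independent specification, so S = 0.
print(chi_500(501, specified=False))  # -500

# The same 501 coins arranged in accord with ASCII text: S = 1.
print(chi_500(501, specified=True))   # 1, i.e. just past the threshold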
October 27, 2011 at 12:12 AM PDT