Uncommon Descent Serving The Intelligent Design Community

Darwinism from an informatics point of view


As everyone knows, life in all its countless instances (organisms) involves internal instructions, as well as processors that run them. Without these instructions, no organism would be able to originate in the first place, let alone develop or survive. The discovery of these instructions – contained in DNA/RNA macromolecules and the molecular machinery that reads and writes them in biological cells – has been hailed as one of the greatest theoretical and experimental breakthroughs of the 20th century. The ID movement claims that these scientific findings have only served to highlight the weaknesses and inconsistencies of the neo-Darwinian theory of macro-evolution, according to which all species have evolved from a common ancestor, as a result of random mutation and natural selection.

The discovery of complex information processing in biology invites the question of whether there are any significant similarities between bio-informatics and the artificial informatics of computers, i.e. computer science. Given that in both fields information has to be managed and processed, some similarities must of course exist. In this post, I will attempt to outline some observations on this topic, which lead inexorably to the conclusion that Darwinian theory is incapable in principle of explaining the mystery of the origin of life and of species, as it claims to do.

When we consider the development of organisms and their complex internal organs and biological systems, we can easily see that these developmental sequences – and here I am talking about both ontogenetic and phylogenetic sequences – must involve complex programs, which embody decision logic about what has to be assembled, and also when and where it should be assembled. In other words, the right things need to be put in the right place at the right time, according to a precise schedule which is in some respects even more rigorous than schedules used in human engineering. For example, the development of an embryo is a process whose countless steps need to be choreographed in their most minute details by a program that is oriented towards the final result. Any error in the execution of this program may have severely deleterious consequences. The same thing can be said regarding the alleged macroevolution of new kinds of organs or even new body plans.

Given that biology and informatics both make use of programs, it will be necessary for me to say a few things about computer programming, in order to explain as clearly as possible exactly what a program is. I know that a lot of UD readers are software developers, so the points I will be making below will be very obvious to them. However, I’ll have to ask them to bear with me, as some of our readers are laypeople in these fields.

In order to process information – i.e. create software – it is necessary to create data and programs. Data is passive information: it cannot change or decide anything by itself. For example, let’s say I have a string variable (called $a) and I set it to contain the value “something” – or maybe I have a numeric variable $b which I set to contain the value 3.14. In these cases, I am neither specifying what should be done with the set values, nor when it should be done. Hence if I were to confine my work as a programmer to simply declaring the values of passive data, I would never be able to actively run a program or control any of its processes. Putting it another way: a program, in its simplest concept, is a blueprint specifying the reiteration of basic decision structures, about what to do and when to do it. A program must specify conditions and actions forming a control structure:

conditions (when to do it)
{
    actions (what to do)
}

In other words, a program is active information. Since it determines conditions and actions, it has to be able to decide and organize things, and it also has to be able to create and change data. A program implies a decision hierarchy – in a word, a “logic”. It states what to do, when certain particular conditions arise. Once a program is designed, its execution by a processor can be used to control data and processes of any kind.
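The distinction between passive data and active information can be sketched in a few lines of Python (the variable names, echoing the $a and $b above, are purely illustrative):

```python
# Passive data: these assignments decide nothing by themselves.
a = "something"
b = 3.14

# Active information: a condition (when to do it) paired with an
# action (what to do). The program, not the data, drives the change.
if b > 3:              # condition
    a = a.upper()      # action

print(a)  # prints "SOMETHING"
```

The two assignments alone would leave the data inert forever; only the condition/action pair makes anything happen.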

The simple structure described above can be repeated many times and can also be nested to create very complex structures with multiple nesting layers, such as the following example, with three nesting levels (the indentations and carriage returns have been inserted to help the reader understand the program flow, but are irrelevant per se at the level of machine code):

conditions
{
    actions
    conditions
    {
        actions
        conditions
        {
            actions
            conditions
            {
                actions
            }
        }
    }
}
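In a real programming language, a nested structure of this kind might look like the following minimal Python sketch (the particular conditions and actions are arbitrary placeholders):

```python
x = 12
trace = []

if x > 0:                      # first nesting level
    trace.append("positive")
    if x % 2 == 0:             # second nesting level
        trace.append("even")
        if x % 3 == 0:         # third nesting level
            trace.append("divisible by three")

print(trace)  # prints ['positive', 'even', 'divisible by three']
```

Each inner condition is only ever evaluated when all the outer conditions hold, which is what gives the structure its hierarchical decision logic.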

Another important concept of programming is that of the sub-function or sub-routine:

function
{

}

The main program can reference and run a sub-function as follows:

conditions
{
    actions
    &function
}

where “&” is the symbol for referencing.

A sub-routine is a sub-program (or “child” program) of the parent program (usually called “main”) that invokes it, which can be referenced (i.e. used indirectly, thanks to a pointer that points to it). Two important things to note about sub-functions are that they work only if they exist somewhere within the software (a very obvious point) and that they are “called” by the main program. In other words, even if we have entire libraries of sub-functions, they will be useless if they are never called: they will be “dormant software”. Thus in a sense, dormant sub-functions constitute passive information. They are passive because they still require a caller that can run them. A sub-function which is never called does absolutely nothing.
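A minimal Python sketch of a dormant sub-function (the names are illustrative): the helper below exists in the program, but because nothing ever calls it, it contributes nothing to the program's behaviour:

```python
events = []

def dormant_helper():
    # This sub-function is never called by the main program,
    # so this line never executes.
    events.append("helper ran")

def main():
    events.append("main ran")

main()
print(events)  # prints ['main ran'] – the dormant function did nothing
```

Deleting dormant_helper entirely would not change the output, which is the sense in which an uncalled function is passive information.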

From another point of view, programming can be defined as whatever implements control of a process. Since – as Michael Behe says – the fundamental problem of biochemistry and molecular biology (and, in the final analysis, of systems biology) is the problem of control, it follows that programming is indispensable in biology, where countless complex and concurrent processes are involved. Because multiple processes are running at the same time in biological systems – a property that scientists refer to as concurrency – there must be some higher level of direction that governs them all.
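As an illustration of concurrency requiring higher-level coordination, here is a small Python sketch (the worker tasks are arbitrary): several threads run at the same time, and a shared lock supplies the coordinating "direction" that keeps their updates to shared state consistent:

```python
import threading

lock = threading.Lock()
log = []

def worker(name):
    # Each worker runs concurrently; the lock coordinates access
    # to the shared log so that updates do not corrupt each other.
    with lock:
        log.append(name)

threads = [threading.Thread(target=worker, args=(f"process-{i}",))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(log))  # prints ['process-0', 'process-1', 'process-2']
```

The workers themselves know nothing about each other; the coordination is imposed from above by the lock and the join logic.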

It should be noted that the conclusions obtained above hold quite independently of whether an organism’s biological instructions are completely contained within its genome, or only partially. There are many (and I would count myself among them) who suspect that the genome, by itself, does not contain enough information to account for the overall biological complexity of an organism. However one thing is certain: the assembly instructions of living beings must exist somewhere, and the science of generating instructions (computer science) can help us understand their organization and fabrication.

Modern evolutionary theory proposes several unguided mechanisms in order to explain the alleged global macroevolution of species from a single common ancestor: random genetic mutations, sexual genetic recombination, horizontal gene transfer, gene duplication, genetic drift, and so on. According to evolutionary theory, the output of all of these blind processes is subsequently processed (or filtered) by natural selection, which allows only the fittest to survive and reproduce. However, as we will see below, not one of these processes is capable of generating programs. Hence they are also incapable of creating new organs, new body plans, or even new species.

The concept of the gene is fundamental to evolutionary theory in particular, and to genetics and biology in general. Despite its importance, we are still a long way from a clear definition of what a gene is. From the old definition of “recipe for a protein” to the new definition of “functional unit of the genome,” the concept of gene has evolved to the point where some researchers now openly declare that “a gene is a unit of both structure and function, whose exact meaning and boundaries are defined by the scientist in relation to the experiment he or she is doing.” In practice, this means that a gene is whatever a particular scientist has in mind when he/she is doing a particular experiment.

The argument which I am putting forward here cuts through these definitional controversies, because from my informatics-based perspective there are really only two possibilities, which can be summarized as follows: either (a) genes are data (which corresponds to the above old definition of a gene); or (b) genes are functions (which corresponds to the new definition). The key point to understand here is that the development of new organs or body-plans (macroevolution) necessarily involves new decision logic, i.e. new hierarchies of nested control structures. Specifically, the architectural complexity (at the system level) of new organs or body-plans and their embryogenesis involves assembly instructions which require advanced-level control, and hence advanced programming.

Let’s suppose that the first option is correct, and that genes are data. In this case, it can easily be demonstrated that point random mutations, sexual recombination, horizontal gene transfer and data duplication are all incapable of creating the hierarchical decision logic of the main program. In fact, data is what the main program elaborates. Data is passive, while the program is active. What is passive cannot create what is active. This is just as true for intelligently designed data as it is for the data upon which the random operations of Darwinian evolution are applied.

We can illustrate this point from another perspective, by using the analogy of the bricks in a building. If genes are data containing only “recipes for proteins,” and proteins are the “bricks” of the organism “building,” then it is obvious that genes/bricks (and the random Darwinian operations performed upon them) cannot account for the construction and assembly of the organism/building – that is, the set of rules and instructions specifying the way in which the various bricks have to assemble together, in order to yield the unity of a complete system. The building construction metaphor also helps us understand why different organisms can have almost the same genetic patrimony. Just as the same bricks can be used to construct entirely different buildings, the same genes can be used to develop entirely different organisms. In other words, in both biology and architecture, what matters are not the basic building blocks, but rather the higher-level instructions which operate upon them.

Now let’s consider the second alternative, which is that genes are equivalent to software sub-functions. This is quite a generous assumption for evolutionists to make, because it implies that genes possess their own internal decision logic, without explaining how they acquired it. In reality, the so-called “regulatory regions” of genes probably don’t warrant being described as true algorithms. But even if genes were the equivalent of software functions, then once again, random mutations, sexual recombination, horizontal gene transfer and duplication of functions would still be incapable of creating hierarchical decision logic. Why not? Because the decision logic contained in the main program is what invokes the functions (by referencing them). Just as a hammer or a drill cannot create a carpenter, the above operations on functions are incapable of creating their user.

Let us note in passing that the classic evolutionist objection that a mutation involving only a few bits (or even a single bit) is capable of triggering major changes (evolutionists typically cite homeobox genes that control some configurations of the body plan, etc.) contains another misunderstanding. For the active information for these changes still has to exist somewhere, and it must be as large as the changes require it to be. It is true that a programmer can write a very short “wrapper program” to trigger large changes, but that doesn’t mean that the changes themselves require only a little information to specify. For example, I can write a short piece of code which I choose to run on my computer – say, a word processor or a chess program. This code is a few bits long, but the word processor and the chess program are really large programs. All the function does is point to or reference them. However, the function doesn’t create the active information contained in the word processor or chess program software; rather, it simply switches control between the two. Hence there is no free-lunch creation of information whatsoever here.
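The "short wrapper" point can be sketched in Python (word_processor and chess_program are stand-ins for what would really be very large applications): the launcher below switches control with a single bit of input, yet creates none of the information contained in the programs it launches:

```python
def word_processor():
    # Stand-in for a large application.
    return "editing a document"

def chess_program():
    # Stand-in for another large application.
    return "playing chess"

def launcher(bit):
    # A few bits of dispatch logic: it references existing
    # programs but writes no new code itself.
    programs = {0: word_processor, 1: chess_program}
    return programs[bit]()

print(launcher(0))  # prints "editing a document"
print(launcher(1))  # prints "playing chess"
```

Flipping the single input bit changes which large body of pre-existing code runs; it does not author any of that code.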

Leaving aside the problems associated with defining what a gene is, it can still be shown that the random processes which evolutionary theory claims are capable of generating biological complexity, simply don’t work. They don’t work because they are, by their very nature, incapable of generating the top-down functional hierarchy of nested decision structures that is responsible for making the whole system. Since this objection to the adequacy of random processes is an in-principle objection, it is useless for evolutionists to attempt to counter it by resorting to vast amounts of time or huge probabilistic resources. The fundamental problem of Darwinism is that the greater cannot come from the less.

To sum up: Darwinism, from an informatics point of view, has absolutely zero credibility. This explains, among other things, why so many computer programmers who are interested in the ID/evolution debate are on the ID side. In their own job they have never seen a single bit of software arise gratis. Rather they have to create, bit by bit, the active information of the software applications they develop. These people are justifiably perplexed when they encounter the evolutionist claim that God did not have to write a single line of code, because biological complexity (which is far greater than any computer software) arose naturalistically. “Why no work for Him and so much work for me?” they may ask. In this post, I hope I have helped explain that God, also in this case, expects far less from us than what He Himself did and does.

Comments
What about this statement: "Trying to map our model of processing on that performed by the weather would probably result in a very inaccurate model." – JT
May 20, 2010, 09:15 AM PDT
I just read 11. It seems like you mean 'Yes':
"You have written an OP which tries to make an analogy between the process of life and data processing. Trying to map our model of processing on that performed by life would probably result in a very inaccurate model."
To slightly reword the above, do you also subscribe to the following: "Trying to map any existing model of data processing on that performed by life would probably result in a very inaccurate model." – JT
May 20, 2010, 09:08 AM PDT
No. See my response @11. – Toronto
May 20, 2010, 08:39 AM PDT
So Toronto, are you saying that any argument with a premise that biological life is computable should be rejected? – JT
May 20, 2010, 08:25 AM PDT
niwrad @20,
When you have to create organisms (able to live, self-reproduce, self-repair, survive, etc.) you necessarily have to elaborate symbolic instructions.
Why symbolic? I can process an analog input voltage with a digital processor using symbolic instructions, or with an analog process that doesn't need any symbolic content at all. If I put this in a black box, you couldn't tell which process it was. If you opened the cell and showed me a digital CPU or equivalent that translated chemical inputs into symbolic information, processed it, and then converted symbolic values into chemical outputs, I would buy into your analogy, but you need to show me that. What you have done is base a conclusion on an analogy which is so far removed from reality that the conclusion you draw is no longer valid. – Toronto
May 20, 2010, 08:07 AM PDT
Toronto and aqeels, Nobody claims that the information processing in cells is identical to current information and robotics technology. Don't go searching for Intel CPUs or Maxtor hard disks inside cells. Don't ask whether the biological instructions are written in C or Java either. However, beyond the details, any kind of information processing involves some common principles. When you have to create organisms (able to live, self-reproduce, self-repair, survive, etc.), you necessarily have to elaborate symbolic instructions. To do that you need at least a memory, some input/output devices, a universal constructor, a control unit, and a language. It is no accident that the same things are necessary to create robots. Note that current robots don't have the above advanced features of life (which is why they can do without the universal constructor); however, for what they do, they still rely on the above informatics principles. The implementation of these principles is designed by computer science, robotics and AI engineers. Hence the aim of my post was this: if robotics/informatics has to be designed, then biology has to be designed too, because both implement the same principles. Since Darwinism denies any design, my conclusion was that Darwinism is necessarily untrue from the informatics perspective. Any difference in details doesn't change this conclusion, because it is based on principles, not on details. – niwrad
May 20, 2010, 07:00 AM PDT
Mr BA^77, "As for Nak's bluff that the deep blue chess program proves that computers are more intelligent than people:" That is not a very accurate gloss of what I said. Programs (such as Deep Blue) can achieve competencies that their programmers do not have. Sooner Emeritus notes on another thread that Blondie24 might be a more apt example, since it evolved its competency. This contradicts what Mr Niwrad was proposing. – Nakashima
May 20, 2010, 06:12 AM PDT
Mr Niwrad, At the risk of sounding repetitious, your original argument was an analogy, based on a dichotomy between data and program. My reply showed that this logical distinction was false. A reply of the kind "But you can't make much money with it!" is inappropriate. By directing your attention to the Microsoft issue, you bypass replying on NASA's antenna building or the Humie awards. That is unfortunate, since the antenna example was a more direct response to your assertion that GP isn't practical. But since it is fun, let's talk about Microsoft a little more. You write: "to design a genetic algorithm that will output the final program is far more expensive than to design directly the final program." If by this you mean the user interface, the use of operating system resources, etc., then I agree with you. EAs are economically attractive when they can test many alternatives quickly and cheaply, relative to some other method. For example, NASA didn't actually build thousands of antennas; they simulated their responses using another standard piece of software. They only built a few of the high-performing champions. It is difficult to design a UI this way, because you have to test a UI with real people. Is Microsoft wrong for using multiple coding methods? Will GP only be successful when Bill Gates uses it to brush his teeth? No. – Nakashima
May 20, 2010, 04:17 AM PDT
As for Nak's bluff that the deep blue chess program proves that computers are more intelligent than people:

"GilDodgen: One of my AI (artificial intelligence) specialties is games of perfect knowledge. See here: worldchampionshipcheckers.com. In both checkers and chess humans are no longer competitive against computer programs, because tree-searching techniques have been developed to the point where a human cannot overlook even a single tactical mistake when playing against a state-of-the-art computer program in these games. On the other hand, in the game of Go, played on a 19×19 board with a nominal search space of 19×19 factorial (1.4e+768), the best computer programs are utterly incompetent when playing against even an amateur Go player. When it comes to true intelligence — even something as seemingly simple as evaluating the grammatical correctness or meaning of a trivial sentence — the best computer programs are less than worthless. I turn off my Microsoft Word grammar checker, because it is wrong almost all of the time, and its suggestions are almost universally laughably stupid. The notion that random errors filtered by natural selection created the human mind — which is capable of creating language and interpreting it — is laughably stupid raised to the 768th power." https://uncommondescent.com/intelligent-design/epicycling-through-the-materialist-meta-paradigm-of-consciousness/#comment-353454

Further note:

The Capabilities of Chaos and Complexity: David L. Abel – Null Hypothesis For Information Generation – 2009. Excerpt: To focus the scientific community's attention on its own tendencies toward overzealous metaphysical imagination bordering on "wish-fulfillment," we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: "Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration." A single exception of non trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis. http://www.mdpi.com/1422-0067/10/1/247/pdf http://mdpi.com/1422-0067/10/1/247/ag

Brooke Fraser – "C S Lewis Song" http://www.youtube.com/watch?v=GHpuTGGRCbY

– bornagain77
May 20, 2010, 03:58 AM PDT
niwrad, actually Nak is a terrible poker player, since we know for a fact he is going to bluff on every single hand while never holding a winning hand.

In the following podcast, Robert Marks, a leading expert in the area of evolutionary algorithms, gives a very informative talk as to the strict limits we can expect from any evolutionary computer program (evolutionary algorithm):

Darwin as the Pinball Wizard: Talking Probability with Robert Marks – podcast http://www.idthefuture.com/2010/03/darwin_as_the_pinball_wizard_t.html

Further note:

A comparative approach for the investigation of biological information processing: An examination of the structure and function of computer hard drives and DNA – David J D'Onofrio, Gary An – Jan. 2010. Excerpt: It is also important to note that attempting to reprogram a cell's operations by manipulating its components (mutations) is akin to attempting to reprogram a computer by manipulating the bits on the hard drive without fully understanding the context of the operating system. (T)he idea of redirecting cellular behavior by manipulating molecular switches may be fundamentally flawed; that concept is predicated on a simplistic view of cellular computing and control. Rather, (it) may be more fruitful to attempt to manipulate cells by changing their external inputs: in general, the majority of daily functions of a computer are achieved not through reprogramming, but rather the varied inputs the computer receives through its user interface and connections to other machines. http://www.tbiomed.com/content/7/1/3

The Capabilities of Chaos and Complexity – David L. Abel. Excerpt: "To stem the growing swell of Intelligent Design intrusions, it is imperative that we provide stand-alone natural process evidence of non trivial self-organization at the edge of chaos. We must demonstrate on sound scientific grounds the formal capabilities of naturally-occurring physicodynamic complexity. Evolutionary algorithms, for example, must be stripped of all artificial selection and the purposeful steering of iterations toward desired products. The latter intrusions into natural process clearly violate sound evolution theory." http://www.mdpi.com/1422-0067/10/1/247/pdf

Arriving At Intelligence Through The Corridors Of Reason (Part II) – April 2010. Excerpt: Summarizing the status quo, Johnson notes for example how AVIDA uses "an unrealistically small genome, an unrealistically high mutation rate, unrealistic protection of replication instructions, unrealistic energy rewards and no capability for graceful function degradation. It allows for arbitrary experimenter-specified selective advantages". Not faring any better, the ME THINKS IT IS LIKE A WEASEL algorithm is programmed to direct a sequence of letters towards a pre-specified target. https://uncommondescent.com/intelligent-design/arriving-at-intelligence-through-the-corridors-of-reason-part-ii/

Accounting for Variations – Dr. David Berlinski – video http://www.youtube.com/watch?v=aW2GkDkimkE

LIFE'S CONSERVATION LAW – William Dembski, Robert Marks – pg. 13. Excerpt: Simulations such as Dawkins's WEASEL, Adami's AVIDA, Ray's Tierra, and Schneider's ev appear to support Darwinian evolution, but only for lack of clear accounting practices that track the information smuggled into them. Information does not magically materialize. It can be created by intelligence or it can be shunted around by natural forces. But natural forces, and Darwinian processes in particular, do not create information. Active information enables us to see why this is the case. http://evoinfo.org/publications/lifes-conservation-law/

Conservation of Information in Computer Search (COI) – William A. Dembski, Robert J. Marks II – Dec. 2009. Excerpt: COI puts to rest the inflated claims for the information generating power of evolutionary simulations such as Avida and ev. http://evoinfo.org/publications/bernoullis-principle-of-insufficient-reason/

Evolutionary Synthesis of Nand Logic: Dissecting a Digital Organism – Dembski, Marks – Dec. 2009. Excerpt: The effectiveness of a given algorithm can be measured by the active information introduced to the search. We illustrate this by identifying sources of active information in Avida, a software program designed to search for logic functions using nand gates. Avida uses stair step active information by rewarding logic functions using a smaller number of nands to construct functions requiring more. Removing stair steps deteriorates Avida's performance while removing deleterious instructions improves it. http://evoinfo.org/publications/evolutionary-synthesis-of-nand-logic-avida/

The Problem of Information for the Theory of Evolution – debunking Schneider's ev computer simulation. Excerpt: In several papers genetic binding sites were analyzed using a Shannon information theory approach. It was recently claimed that these regulatory sequences could increase information content through evolutionary processes starting from a random DNA sequence, for which a computer simulation was offered as evidence. However, incorporating neglected cellular realities and using biologically realistic parameter values invalidate this claim. The net effect over time of random mutations spread throughout genomes is an increase in randomness per gene and decreased functional optimality. http://www.trueorigin.org/schneider.asp

"There has been increasing recognition that genes deal with information processing. They have been referred to as 'subroutines within a much larger operating system'." – Kirk Durston

– bornagain77
May 20, 2010, 03:27 AM PDT
Nakashima #15,16: My friend, you would be a terrific poker gambler. I love it when you bluff. Research.microsoft.com contains all sorts of things, including papers on "Pictures from the birdpark at Iguasu Falls" and "Eroticism and the night: Sensual, rhythmic and risky design". So no wonder it contains articles on genetic programming too. Microsoft has produced terabytes of software. Not a single bit of it was obtained by genetic programming. This is not a pragmatic observation, but a conceptual issue: to design a genetic algorithm that will output the final program is far more expensive than to design the final program directly. If genetic programming (or whatever machine) really created new information, as you say, Microsoft would not have the thousands of human developers it has. Bill Gates likes to save dollars. – niwrad
May 20, 2010, 02:54 AM PDT
Mr BA^77, "Bill Gates does not employ random number generators and selection software to devise 'more evolved' computer programs." A quick search of research.microsoft.com for "evolutionary algorithm" suggests otherwise. You may be particularly interested in Finding a Better-than-Classical Quantum AND/OR Algorithm using Genetic Programming. – Nakashima
May 19, 2010, 07:46 PM PDT
Mr Niwrad, "Not for chance in the software industry genetic programming has no practical use. From the theoretical point of view genetic programming doesn't prove unguided evolution because in genetic algorithms there is no real creation of active information." Sorry, you can't rescue an argument based on analogy and logic by an appeal to pragmatic considerations. Genetic programming is a well-developed field in computer science, and they give out cash prizes every year for programs that have evolved capabilities better than human. Such programs certainly are examples of the creation of "real" active information - they are capable of functions the programmer did not have. Trivially, we know that Deep Blue played chess better than any of its programmers. We know GP systems discover novel ways of solving problems that their programmers did not know existed and can be hard put to explain. Far from being incapable of novelty, they often sample the design space more widely than a set of human-generated designs does. This web page is a good discussion with references. – Nakashima
May 19, 2010, 07:30 PM PDT
niwrad, I really hope you, or someone, does deal with this in a future post, and I would be very interested specifically in a programmer's take on it (especially given what you already outlined in this post).

You are right: the +9x% pseudo-fact has been countered on different fronts, mainly by the inclusion of "junk DNA" segments with the finding of +90% functionality of the genome. In fact, if the entire genome is considered, the "pseudo fact" of genetic similarity is cut down to 70%:

Chimpanzee? 10-10-2008 – Dr Richard Buggs. Excerpt: Therefore the total similarity of the genomes (between chimps and humans) could be below 70%. http://www.idnet.com.au/files/pdf/Chimpanzee.pdf

But of more interest to you, you stated: "Without a top direction that provides instructions to the bottom executers no complex system can be organized." Thus, exactly as you are seeing things niwrad, it is found that the bottom executers (similar genes) output different protein sequences between chimps and humans:

Eighty percent of proteins are different between humans and chimpanzees; Gene; Volume 346, 14 February 2005. http://www.ncbi.nlm.nih.gov/pubmed/15716009

As well, the timing of execution for the genes is found to be vastly different:

A Primer on the Tree of Life (Part 4). Excerpt: "In sharks, for example, the gut develops from cells in the roof of the embryonic cavity. In lampreys, the gut develops from cells on the floor of the cavity. And in frogs, the gut develops from cells from both the roof and the floor of the embryonic cavity. This discovery—that homologous structures can be produced by different developmental pathways—contradicts what we would expect to find if all vertebrates share a common ancestor." – Explore Evolution http://www.evolutionnews.org/2009/05/a_primer_on_the_tree_of_life_p_3.html#more

Further note:

The Unbearable Lightness of Chimp-Human Genome Similarity. Excerpt: One can seriously call into question the statement that human and chimp genomes are 99% identical. For one thing, it has been noted in the literature that the exact degree of identity between the two genomes is as yet unknown (Cohen, J., 2007. Relative differences: The myth of 1%. Science 316: 1836.). In short, the figure of identity that one wants to use is dependent on various methodological factors. http://www.evolutionnews.org/2009/05/guy_walks_into_a_bar_and_think.html#more

Human Genes: Alternative Splicing (For Proteins) Far More Common Than Thought. Excerpt: Two different forms of the same protein, known as isoforms, can have different, even completely opposite functions. For example, one protein may activate cell death pathways while its close relative promotes cell survival. http://www.sciencedaily.com/releases/2008/11/081102134623.htm

Human genes are multitaskers. Abstract: Genome-wide surveys of gene expression in 15 different tissues and cell lines have revealed that up to 94% of human genes generate more than one (protein) product. http://www.nature.com/news/2008/081102/full/news.2008.1199.html

Off topic: here are a few hundred new Contemporary Christian Music videos: http://new.music.yahoo.com/videos/7318725;_ylt=Auv7LfOwZ8obpVsUJHa7U7qxvyUv?cat=7318725&page=1

– bornagain77
May 19, 2010 at 03:07 PM PDT
niwrad @ 11: I agree that biologists will pursue this field. But it's not just an ID mindset that they need; possibly a complete paradigm shift away from conventional models (like the software and hardware analogies) may yield the greatest discoveries. I just mean that we as ID proponents need to keep an open mind and not be fooled by what we see. Otherwise, I could not agree with you more!

aqeels
May 19, 2010 at 02:29 PM PDT
bornagain77 #6

As you have rightly noted, the point I stressed in my post (the difference between data and their elaboration) has something to do with phenomena such as alternative splicing, where the same or very similar genomic data can generate very different outputs depending on the mechanisms involved (for example, the same genes coding for different proteins). In particular, it is obvious that the +9X% similarity between the chimp and human genomes loses much of its appeal if their elaboration is very different. I think there would be a lot to say about the methods of genomic comparison in general, and between chimps and humans in particular. In other words, the +9X% pseudo-fact (as you call it) can also be countered per se, without taking into consideration the difference in programming code (maybe I will deal with this topic in another post).

niwrad
May 19, 2010 at 02:07 PM PDT
niwrad & aqeels,

aqeels: "The devil is always in the detail and I would wager that the human body operates in ways that are so alien to our own models of computation and decision making that the old analogies we used to use prove to be wrong."

While aqeels and I appear to be on different sides of the evolution debate, we both seem to agree that your analogy doesn't come close enough to be of any use, to the point of being misleading. For instance:
- What about shared memory?
- Semaphores?
- Is there a real-time element?
- Is it a multi-processor environment?

You have written an OP which tries to make an analogy between the process of life and data processing. Trying to map our model of processing onto that performed by life would probably result in a very inaccurate model.

aqeels: "I think the safest analogy we can make is that at the heart of things is information, and that biological entities must process it like a black box."

I have no problem with the above statement at all, despite being an atheist (but a humble one).

Toronto
May 19, 2010 at 01:46 PM PDT
niwrad, "the informatics analogy is worth considering." You are in good company: Bill Gates thinks so too. In recognizing the superiority of genetic coding compared to the best computer coding we now have, he has funded research into this area:

Welcome to CoSBi (Computational and Systems Biology)
Excerpt: Biological systems are the most parallel systems ever studied and we hope to use our better understanding of how living systems handle information to design new computational paradigms, programming languages and software development environments. The net result would be the design and implementation of better applications firmly grounded on new computational, massively parallel paradigms in many different areas.
http://www.cosbi.eu/index.php/component/content/article/171

Of note: Bill Gates does not employ random number generators and selection software to devise "more evolved" computer programs.

Every Bit Digital: DNA's Programming Really Bugs Some ID Critics - March 2010
Excerpt: In 2003 renowned biologist Leroy Hood and biotech guru David Galas authored a review article in the world's leading scientific journal, Nature, titled "The digital code of DNA." The article explained, "A remarkable feature of the structure is that DNA can accommodate almost any sequence of base pairs - any combination of the bases adenine (A), cytosine (C), guanine (G) and thymine (T) - and, hence, any digital message or information." MIT Professor of Mechanical Engineering Seth Lloyd (no friend of ID) likewise eloquently explains why DNA has a "digital" nature: "It's been known since the structure of DNA was elucidated that DNA is very digital. There are four possible base pairs per site, two bits per site, three and a half billion sites, seven billion bits of information in the human DNA. There's a very recognizable digital code of the kind that electrical engineers rediscovered in the 1950s that maps the codes for sequences of DNA onto expressions of proteins."
http://www.salvomag.com/new/articles/salvo12/12luskin2.php

bornagain77
May 19, 2010 at 01:25 PM PDT
Toronto #1 and aqeels #4

You seem to share the objection that the informatics analogy actually has little empirical support. It is true that the job of completely reverse-engineering biological systems has yet to be accomplished. It is the target of the biologists of the XXI century (and they will succeed only by acquiring an ID mindset). However, it seems already acknowledged that the genome is a sort of read/write storage medium containing symbolic data. The read/write operations are not random, but rather driven by processes according to the specific necessities of the cell, and they imply an addressing mechanism and an encoder/decoder ability. This is similar to the relation that exists between the control unit and the memory in a computer. The control unit must necessarily host (or at any rate execute) a main blueprint or schedule governing all the sub-processes. Without a top direction that provides instructions to the bottom executers, no complex system can be organized. It seems to me that, even based on these few elements, the informatics analogy is worth considering.

niwrad
May 19, 2010 at 01:00 PM PDT
Nakashima #3

"You write as if the entire field of genetic programming did not exist."

It is not by chance that genetic programming has no practical use in the software industry. From the theoretical point of view, genetic programming doesn't prove unguided evolution, because in genetic algorithms there is no real creation of active information.

"You further assume that what is passive or unused can never become active or vice versa. But we know that this does happen, both in real computer systems and in biology. In computer systems we see the problem of buffer overruns leading to data being executed. This is done intentionally by computer hackers. In biology we see stop codons being mutated or lost due to copy problems, with the result that reading continues into areas of the genome that were previously passive. The point is that your neat logical dichotomy between programs and data does not exist in the real world, either in computer science or in biology. Therefore the conclusions you draw from it are invalid."

Buffer overruns in computers and DNA overruns in biology don't cause the creation of information; rather, it is likely that they generate failures. You could consider the generation of crashes a form of activity, but what I had in mind is constructive, not destructive, activity. The distinction between data and programs is an aspect of the distinction between instructions and their processor. Both are necessary: instructions without a processor do nothing; a processor without instructions does nothing. To say that this distinction doesn't exist in the real world is like saying that agents and actions don't exist because they are the same thing. My point was that blind evolution cannot work because it can only randomly rearrange data, while the problem of biology is to create both the data and their manager.

niwrad
May 19, 2010 at 11:55 AM PDT
Correction, this should read: it seems that a "unique higher level programming code of active information" is deciphered by their methodology for each species by taking into consideration the entirety of exon and intron sequence dissimilarity found between species.

bornagain77
May 19, 2010 at 10:53 AM PDT
niwrad, excellent post. From this quote of yours - "However, as we will see below, not one of these processes is capable of generating programs. Hence they are also incapable of creating new organs, new body plans, or even new species." - I think you may be able to help me with something we were talking about yesterday in this post:

A Code That Isn't Universal
https://uncommondescent.com/intelligent-design/the-code-within-the-code/

specifically this, at comments 3 and 4:

Canadian Team Develops Alternative Splicing Code from Mouse Tissue Data
Excerpt: "Our method takes as an input a collection of exons and surrounding intron sequences and data profiling how those exons are spliced in different tissues," Frey and his co-authors wrote. "The method assembles a code that can predict how a transcript will be spliced in different tissues."

Thus niwrad, as far as I can see (which ain't so far), it seems that a "unique higher level programming code of active information" is generated by their methodology for each species, one that takes into consideration the entirety of exon and intron sequence dissimilarity found between species. As you know, evolutionists are very keen on highlighting the alleged +98% similarity between chimps and humans through misleading genomic comparisons that are very selective in "what and when" they compare. Thus I thought that someone with a programming background, as well as a basic understanding of biology - someone like you - might be able to find something to counter that +98% pseudo-fact by highlighting the difference in programming code. This line of investigation, for highlighting a dramatic "programming" difference between humans and chimps, looks promising because:

Modern origin of numerous alternatively spliced human introns from tandem arrays - 2006
Abstract excerpt: A comparison with orthologous regions in mouse and chimpanzee suggests a young age for the human introns with the most-similar boundaries. Finally, we show that these human introns are alternatively spliced with exceptionally high frequency.
http://www.pnas.org/content/104/3/882.full

If you are interested and think this may be worth looking into further, could you let me know?

bornagain77
May 19, 2010 at 10:05 AM PDT
niwrad: very good post indeed. I really like the clarity with which you sum up some fundamental truths which are usually misunderstood or just ignored.

I have tried many times to state the obvious fact that traditional protein coding genes are only the final effectors of biological information. It is obvious that, if you change a final effector, you can cause a deep change in the final result: that is the case, for instance, in mutations in homeobox genes, and in many single mutation diseases. But, as you state very clearly, in no way does that mean that the final effector contains all the information implied in the final result.

The problem is that we know a lot about final effectors (protein genes and proteins), and very little, indeed almost nothing, about what I would call "the real code", or "the procedures", in exactly the same sense you use for the main program and its sub-routines. Where is this code? Where is this information? We really don't know. And, as you very correctly state, the problem of "where is the code" has deep implications both for phylogenesis and for ontogenesis. In other words, not only do we not know how species, phyla, body plans etc. originated; we really don't understand how a complex multicellular being originates from a single cell containing exactly the same genome as all its multiple differentiated daughter cells. Again, where is the information? Is it mainly genomic, or epigenetic? And how is it encoded?

Today we know many things, but they are still really only a little bit of the general picture. We know the importance of transcriptomes in individual cells. But transcriptomes are really mainly the effect of transcription factors. Which are proteins. So, what controls transcription factors? What is the cause of transcription factors' transcriptomes at any given moment? We know that non-coding DNA and non-translated RNA must have an important role. But non-coding DNA remains mainly a mystery. One thing is to assume that transposons and repetitive sequences may have an important role in shaping genetic information (I completely agree with that assumption). Another thing is to have a detailed idea of how that may happen, and of where the information is coded which controls transposons and repetitive sequences.

The fact remains that in ontogenesis a single mass memory of data (the genome) is in some way used to realize a lot of sub-programs (the individual sequence of transcriptomes in each specialized cell type). The current explanations for how that may happen are absolutely unsatisfying, as though the mere sequence of genes could determine, with a little help from random outer inputs, the whole procedure. You can easily see how unlikely that is. There are some interesting approaches about second codes, the importance of the spatial conformation of DNA segments, and so on. But the problem remains the same: the genome in each cell of a single multicellular individual (with few important exceptions) is one and only one, and in itself it is static. Somehow, living cells use it appropriately in extremely varied contexts and for extremely varied sub-routines. Which information guides them?

I think it's truly a pity that at present we know so little about the procedures and their information content. The more we know about that, the less credible the idea will be that such information was generated by darwinian, or neo-darwinian, or neo-neo-darwinian mechanisms.

gpuccio
May 19, 2010 at 09:38 AM PDT
niwrad - I'm in IT as a chosen profession and appreciate your analogies. However, I would have to point out that whilst these analogies are useful for illustrating certain points, they cannot be extrapolated indefinitely without empirical support (outside of simple analogies). I mean, if we say that a subroutine must be called from the main program, then we are obliged to show where the equivalents can be found within biology. A subroutine, for example, makes use of a structure called a stack, which is a temporary memory holding area that operates on a LIFO (last in, first out) basis, thereby allowing efficient nesting of function calls; so once again the question will be to find the equivalent arrangement within biology.

The devil is always in the detail, and I would wager that the human body operates in ways that are so alien to our own models of computation and decision making that the old analogies we used to use prove to be wrong. I think the safest analogy we can make is that at the heart of things is information, and that biological entities must process it like a black box. We are only beginning to understand the mechanics of life, so we should be humble to our core (something the evolutionists are not).

aqeels
May 19, 2010 at 08:55 AM PDT
Mr Niwrad,

You write as if the entire field of genetic programming did not exist. Within the field of computer science, the duality of programs and data is well known. In fact, you make use of it yourself when referring to unused functions. Your presentation of the ideas of program and data as logically separate entities assumes a "Harvard architecture" distinction between program and data. That distinction does not exist in the more common von Neumann architecture, and its absence is exemplified in programming languages such as Lisp. It should be no surprise that Lisp was used in some of the first genetic programming systems for exactly this reason: to take advantage of the duality of programs and data. A single string in memory could be executed as a program in one phase (similar to the protein creation system in biological systems) and in another phase operated upon as data (similar to the copying of DNA during cell replication).

You further assume that what is passive or unused can never become active, or vice versa. But we know that this does happen, both in real computer systems and in biology. In computer systems we see the problem of buffer overruns leading to data being executed. This is done intentionally by computer hackers. In biology we see stop codons being mutated or lost due to copy problems, with the result that reading continues into areas of the genome that were previously passive.

The point is that your neat logical dichotomy between programs and data does not exist in the real world, either in computer science or in biology. Therefore the conclusions you draw from it are invalid.

Nakashima
May 19, 2010 at 08:36 AM PDT
What I find ironic is that the progress of science, which is supposed to back up evolutionary claims, is doing exactly the opposite: debunking evolution completely. I am talking about real science, not the storytelling, non-falsifiable pseudoscience of the atheistic evolutionary community.

aedgar
May 19, 2010 at 08:06 AM PDT
Where in your process are the support mechanisms for dereferencing pointers, handling calls and their returns, and making comparisons in order to perform conditional logic? If your analogy is to be valid, you have to show that such a framework to perform processing is there.

As for highlighting the complexity of a chess program: the programmer starts with the first line and ends with the last. No program just appears on your hard disk; it evolves from that first line. If the programmer doesn't like a function, he rewrites it until it survives his tests. He also doesn't have to write the whole program, as he can use libraries written for other programs that are suitable for use in his new chess program.

Toronto
May 19, 2010 at 07:57 AM PDT