Uncommon Descent Serving The Intelligent Design Community

The Relationship Between ID and Common Descent


Since this has popped up a lot in the last few weeks, I wanted to repost an old post of mine describing the relationship between ID and Common Descent. I think it is pretty much as relevant now as when I originally posted it almost 6 years ago.

Many, many people seem to misunderstand the relationship between Intelligent Design and Common Descent. Some view ID as being equivalent to Progressive Creationism (sometimes called Old-Earth Creationism); others see it as equivalent to Young-Earth Creationism. I have argued before that the core of ID is not about a specific theory of origins. In fact, many ID’ers hold a variety of views, including Progressive Creationism and Young-Earth Creationism.

But another category that is often overlooked is those who hold to both ID and Common Descent, where the descent was purely naturalistic. This view is often considered inconsistent. My goal is to show that it is in fact a consistent proposition.

I should start by noting that I do not myself hold to the Common Descent proposition. Nonetheless, I think that the relationship of ID to Common Descent has been misunderstood enough as to warrant some defense.

The issue is that most people understand common descent entirely from a Darwinian perspective. That is, they assume that the notions of natural selection and gradualism follow closely from the notion of common descent. However, there is nothing that logically ties these together, especially if you allow for design.

In Darwinism, each feature is a selected accident. Therefore, Darwinian phylogenetic trees often use parsimony as a guide, meaning that they are constructed so that complex features don’t have to evolve more than once.

The ID version of common descent, however, doesn’t have to play by these rules. The ID version of common descent includes a concept known as frontloading – where the designer designed the original organism so that it would have sufficient information for its later evolution. If one allows for design, there is no reason to assume that the original organism must have been simple. It may in fact have been more complex than any existing organism. There are maximalist versions of this hypothesis, where the original organism had a superhuge genome, and minimalist versions (such as from Mike Gene) where only the basic outlines of common patterns of pathways were present. Some have objected to the idea of a superhuge genome, on the basis that it isn’t biologically tenable. However, some amoebae have genomes with over 100 times as many base pairs as a human’s, so the carrying capacity of genetic information for a single-celled organism is quite large. I’m going to focus on views that tend towards the maximalist.

Therefore, because of this initial deposit, it makes sense that phylogenetic change would be sudden instead of gradual. If the genetic information already existed, or at least largely existed, in the original organism, then time wouldn’t be the barrier to it coming about. It also means that multiple lineages could lead to the same result. There is no reason to think that there was only one lineage that led to tetrapods, for instance. If there were multiple lineages all carrying basically the same information, there is no reason why there couldn’t have been multiple tetrapod lineages. It also explains why we find chimeras much more often than we find organs in transition. If the information was already in the genome, then the organ could come into existence all at once. It didn’t need to evolve, except to switch on.

Take the flagellum, for instance. Many people criticize Behe for thinking that the flagellum just popped into existence sometime in history, based on irreducible complexity. That is not the argument Behe is making. Behe’s point is that the flagellum, whenever it arose, didn’t arise through a Darwinian mechanism. Instead, it arose through a non-Darwinian mechanism. Perhaps all the components were there, waiting to be turned on. Perhaps there is a meta-language that guided the piecing together of complex parts in the cell. There are numerous non-Darwinian evolutionary mechanisms which are possible, several of which have been experimentally demonstrated. [[NOTE – I would define a mechanism as non-Darwinian when the mechanism of mutation biases the mutational probability towards mutations which are potentially useful to the organism.]]

Behe’s actual view, as I understand it, pushes the origin of information back further. Behe believes that the information came from the original arrangement of matter in the Big Bang. Interestingly, that seems to comport well with the original conception of the Big Bang by Lemaître, who described the universe’s original configuration as a “cosmic egg”. We think of eggs in terms of ontogeny – a child grows in a systematic fashion (guided by information) to become an adult. The IDists who hold to Common Descent often view the universe that way – it grew, through the original input of information, into an adult form. John A. Davison wrote a few papers on this possibility.

Thus the common ID claims of “sudden appearance” and “fully-formed features” are entirely consistent with both common descent (even fully materialistic common descent) and non-common-descent versions of the theory, because the evolution is guided by information.

There are also interesting mixes of these theories, such as Scherer’s Basic Type Biology. Here, a limited form of common descent is taken, along with the idea that information is available to guide the further diversification of the basic type along specific lines (somewhat akin to Vavilov’s Law). Interestingly, there can also be a universal common descent interpretation of Basic Type Biology, but I’ll leave that alone for now.

Now, you might be saying that the ID form of common descent only involves the origin of life, and therefore has nothing to do with evolution. As I have argued before, abiogenesis actually has a lot to do with the implicit assumptions guiding evolutionary thought. And, as hopefully has been evident from this post, the mode of evolution from an information-rich starting point (ID) is quite different from that of an information-poor starting point (neo-Darwinism). And, if you take common descent to be true, I would argue that ID makes much better sense of what we see (the transitions seem to happen with some information about where they should go next).

Now, you might wonder why I disagree with the notion of common descent. There are several reasons, but I’ll leave you with one I have been contemplating recently. I think that agency is a distinct form of causation from chance and law. That is, things can be done with intention and creativity which could not be done in their complete absence. In addition, I think that there are different forms of agency in operation throughout the spectrum of life (I am undecided about whether the lower forms of life, such as plants and bacteria, have anything which could be considered agency, but I think that, say, most land animals do). In any case, humans seem to engage in a kind of agency that is distinct from that of other creatures. Therefore, we are left with the question of the origin of such agency. While common descent in combination with ID can sufficiently answer the origin of information, I don’t think it can sufficiently answer the origin of the different kinds of agency.

(NOTE – original post and discussion here)

Comments
gpuccio:
I cannot accept frontloading because new complex functional information has been added throughout natural history.
And new codes.

Mung
October 31, 2015, 11:37 AM PDT
johnnyb: Thank you for explaining your ideas better. I don't know if I misunderstand your goals. Maybe. However, the point remains that I am not interested in theories which are in no way supported by known facts, because empirical science is empirical science, not mathematics, and the only purpose of theories is to explain observed facts. You insist that "there is no logical contradiction". OK, and so? A theory must explain observed facts, if possible without obvious logical contradictions. If a theory explains nothing of what we observe, the simple fact that it bears no logical contradictions is not enough to make it an interesting theory. However, I fully respect your thoughts, as I hope you may respect mine. :)

gpuccio
October 31, 2015, 10:23 AM PDT
gpuccio - I think you misunderstand my goals and my point. First of all, I do not myself hold to common descent. My point, if you missed it, is that there is no logical contradiction in someone saying that (a) ID is true, (b) universal common descent is true, and (c) universal common descent happened entirely by a mechanical process, at least in the case of genetics.

The channel capacity theorem lets us know that for any fixed length of time, we can plan for and enact a coding mechanism which will safeguard that much information through time. Therefore, there is no logical contradiction in the idea of frontloading. I gave an example calculation to show how one might go about estimating the amount of information needed.

Now, if someone held to this view, would that require them to think that the information is contained in the original organism? No. It gives an amount of information, but information can be transmitted across different media. Therefore, the information could be held in some other way, or it could even be external to the organism. However, in those cases, we would want to convert from base pairs to bits (basically multiplying the result by 2).

Now, this interests me for several reasons. First of all, it is certainly a more reasonable and quantifiable theory of evolution than, say, Darwinism. But, even more interesting, it opens up a whole host of questions we might not otherwise think to ask. This is often the case with big theories - it isn't so much that the big theory is true in an absolute sense, but that once you understand the theoretical underpinnings, it helps you ask questions you might not otherwise think to ask. For example, one might ask, "can an original ancestor actually hold that much information?" Then, that can lead to experiments to determine just how much information an original ancestor can hold. It can lead to experiments to find out how compressible the genome is. These things then might lead us to determine how big a genome the original ancestor could in fact have had. If we decide that the number of required bits could not fit in the original genome, we might have an estimate of the total number of original genomes that are needed.

Another consideration, which you mentioned, is "such information has been gradually segregated in the course of natural history". In fact, this has been repeatedly identified in the more well-established cases of evolution. Considering information in the way we just did helps us think more theoretically about such issues.

I like questions that give me new ways to think about problems. And that's why I enjoy considering the possibility of mechanistic universal common descent. I also like exploring the opposite side of the spectrum with Louis Agassiz's forgotten classic "Essay on Classification". Great ideas don't hand you the truth on a platter. Great ideas give you new dimensions of consideration that you haven't thought of before. Taking them seriously and taking them to their limits teaches you how each works, where it is useful, and where it isn't. That's what exercises like this are good for - they get you thinking in brand new ways.

Zachriel -
That probably only represents a portion of the total species.
Yes, but my point was simply to get a number to work from. If you have better numbers feel free to plug them in!
On the other hand, many species are close analogues of other species, so there should be considerable data compression possible.
Exactly. It would be an interesting and worthwhile endeavor to try to analyze the compressibility of genomes, and the compressibility of genomes with respect to each other. In fact, the compressibility of two genomes with respect to each other might be a more theory-neutral method of measuring similarity of genomes.

johnnyb
October 31, 2015, 09:54 AM PDT
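The "compressibility of two genomes with respect to each other" idea in the comment above has a standard form: normalized compression distance. Here is a minimal, hedged sketch of it, with `zlib` standing in for a serious genomic compressor and with invented toy sequences rather than real genomes:

```python
import random
import zlib

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance: a rough, theory-neutral
    similarity measure (near 0 for closely related inputs, near 1
    for unrelated ones), using zlib as a stand-in compressor."""
    ca = len(zlib.compress(a))
    cb = len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

# Toy stand-ins for genomes -- illustrative only, not real sequences.
g1 = b"ACGTACGTAAGGCCTT" * 200
g2 = b"ACGTACGTAAGGCCTA" * 200                         # nearly identical to g1
g3 = bytes(random.Random(0).choices(b"ACGT", k=3200))  # unrelated noise

# A related pair compresses well against itself, so its distance
# should come out smaller than the unrelated pair's.
print(ncd(g1, g2) < ncd(g1, g3))  # True
```

The appeal of this measure in the thread's context is exactly what the comment says: it needs no phylogenetic model at all, only a compressor, so the similarity score carries no Darwinian (or design) assumptions.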
After all, humans are just modified deuterostomes.
Modified by Intelligent Design. That is what a common design is all about - modifications on a similar theme.

Virgil Cain
October 31, 2015, 07:42 AM PDT
johnnyb: There are about 1 million (10^6) catalogued animal and plant species. That probably only represents a portion of the total species. Then you forgot to include trilobites and archaeopteryx. Altogether you are off by several orders of magnitude. On the other hand, many species are close analogues of other species, so there should be considerable data compression possible. After all, humans are just modified deuterostomes.

Zachriel
October 31, 2015, 07:37 AM PDT
johnnyb: Just let me understand. You are suggesting that at OOL there were living beings with a genome of, say, 2.7 * 10^15 base pairs which contained the genomic information of all future species, and that such information has been gradually segregated in the course of natural history? Is that your point? Bizarre, indeed. Can you point to even one fact that would support such a theory?

gpuccio
October 31, 2015, 07:16 AM PDT
Zachriel -
How much and what kind of information is that?
I am specifically talking about genetic information in this post. The most important result is not the specific number, but that it is of finite size. However, just to show an example of how one might calculate it, here is a (very) naive approach to the estimation for known animals and plants (i.e. excluding prokaryotes, fungi, etc.).

There are about 1 million (10^6) catalogued animal and plant species. An average genome is about 1 billion (10^9) base pairs. This gives us a total of 10^15 base pairs that need to be transmitted to the present day. Average animal life span - 5 years (this number just pulled out of the air). Number of generations to the Cambrian explosion ~ 10^8. Noise in the channel - 10 mutations per generation, giving a signal degradation of 10^-8 per base pair, which is a fidelity of (1 - 10^-8) per generation.

Now, the basic channel capacity theorem is simply a relationship, like I made earlier, showing that it is a finite number (the main usage of it is the Shannon-Hartley theorem, which is focused on digital information over analog communication channels). In our case, the signal is surviving at (1 - 10^-8) per generation, for 10^8 generations. That gives us a total fidelity of 0.37. Therefore, the original genome size for preserving 10^15 base pairs of information would be 2.7 * 10^15 base pairs.

Again, this is a pretty naive approach. The figure would be increased based on how important different parts of the genome are to proper function, but also largely decreased (i.e. by several orders of magnitude) if the original organism encoded any sort of compression (which is obviously possible given both the amount of genetic overlap between creatures and the number of repetitive sequences in any given organism). Other considerations we didn't include are the effect of having multiple copies (i.e., individuals) carrying it, the effect of sexual recombination, and the inclusion of variability.

However, hopefully it is easy to see that, as the channel capacity theorem states, there is *some* amount of redundant coding which will preserve your message intact across noisy channels.

johnnyb
October 31, 2015, 06:37 AM PDT
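The back-of-the-envelope arithmetic in the comment above can be checked in a few lines. This is only a sketch of that naive estimate, using the comment's own assumed figures (10^15 total base pairs, 10^8 generations, 10^-8 per-generation loss) rather than measured data:

```python
# Reproducing the naive channel-fidelity estimate from the comment above.
# All figures are the comment's assumptions, not measured biological data.

total_bp = 1e6 * 1e9          # ~10^6 species x ~10^9 bp average genome
generations = 1e8             # assumed generations back to the Cambrian
fidelity_per_gen = 1 - 1e-8   # 10 mutations per ~10^9 bp per generation

# Surviving fraction after all generations: (1 - 1e-8)^(1e8) ~ 1/e ~ 0.37
total_fidelity = fidelity_per_gen ** generations

# Ancestral genome size needed so that total_bp survive to the present
required_bp = total_bp / total_fidelity

print(f"total fidelity ~ {total_fidelity:.2f}")   # ~0.37
print(f"required bp    ~ {required_bp:.1e}")      # ~2.7e+15
```

The 0.37 is no accident: (1 - 1/n)^n approaches 1/e ≈ 0.368 for large n, which is why the required ancestral genome comes out at about e × 10^15 ≈ 2.7 × 10^15 base pairs.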
Zachriel: "In science, practical problems often are theoretical problems." For once, I fully agree! :)

gpuccio
October 30, 2015, 11:35 PM PDT
computerist: I am not sure I understand what you mean; could you please explain better? What I mean is that the result of design happens in a definite time and space. That remains true whether the designer acts continuously or only at certain times. For example, eukaryotes appear at a certain time (even if we don't know exactly when). There is a window of time where they are not there, and a window of time where they are there. If the appearance is gradual (which is possible), there is a window of time in the middle where they appear. In all cases, the result of design can be observed in time. And space, obviously (it happens somewhere).

Another aspect could be: is the designer in time and space? I am not considering that aspect, because we have at present no scientific clues to answer it. If aliens are the designer, they are probably in time and space. If a god is the designer, he could well be beyond time and space (according to how we conceive him), but then his actions have a result in time and space, which is my point. And I suppose there may be different answers. However, from a scientific point of view, and with the data available, I think we can only observe the results of biological design in time and space.

gpuccio
October 30, 2015, 11:34 PM PDT
In time and space.
I would also add the possibility of "through time and space".

computerist
October 30, 2015, 02:27 PM PDT
johnnyb: That is, the channel capacity theorem allows us to determine, for a given amount of transmitted data, and a given amount of noise in the transmission channel, and the number of transmissions, how much original information is needed in the original ancestor. How much and what kind of information is that?

johnnyb: That amount may be a practical problem, but it is not a theoretical problem. In science, practical problems often are theoretical problems.

Zachriel
October 30, 2015, 02:06 PM PDT
johnnyb: The two points you disagree with are essentially the same point: I cannot accept frontloading because new complex functional information has been added throughout natural history. That is not a logical point: it is an empirical observation.

If you have followed my posts here in recent years, you may know that I firmly believe that each new protein superfamily is an example of new complex functional information, completely beyond any possible non-design explanation. Now, the point is: new protein superfamilies have been appearing throughout natural history, at all times. For example, I think that LUCA was probably the first living being, or pool of beings, on our planet. IOWs, I believe that life started on our planet more or less in a "prokaryotic" form. That is remarkable, because it means that OOL was a process, however sudden or gradual we can imagine it, that generated probably the greatest amount of new functional information in a relatively short time, at least at the level of protein sequences. Indeed, about half of basic protein superfamilies seem to have been already present in LUCA. So, in a sense, at OOL there was a "frontloading" of a huge quantity of information.

But was that information sufficient to generate the following developments of life? Certainly not. Let's take just the following big step, for example: the appearance of eukaryotes. Now, let's avoid for the moment the problem of descent: whether eukaryotes derived in some way from prokaryotes, or were some creation from scratch. That is not important, for the moment. What is really important is the following:

1) Eukaryotes still use a lot of the protein information in prokaryotes.

2) Eukaryotes have a lot of new protein information, which did not exist in prokaryotes.

Both those statements are unequivocally true, as far as what we can observe in proteomes is concerned. Now, how can anyone explain all the new proteins and protein functions in eukaryotes by some previous frontloading? I can see exactly nothing in the known facts which supports such a view. The appearance of eukaryotes is, therefore, as much an obvious act of new design intervention as OOL. And the same can be said for metazoa, and the Cambrian explosion. And these are just the most amazing examples.

The simple truth is: the designer, whoever he is, acts repeatedly in natural history. Maybe continuously, maybe at definite times. But many times, many identifiable times. In time and space.

gpuccio
October 30, 2015, 01:40 PM PDT
“Evolution, in the sense of common descent, is not a theory of similarity. Linnaeus, Cuvier, and Agassiz knew all about similarity, yet they denied common descent. Evolution is a theory of transformation.” - Paul Nelson, "What Evolution Is, and What It's Not", October 30, 2015, http://www.evolutionnews.org/2015/10/what_evolution100501.html

bornagain
October 30, 2015, 11:40 AM PDT
'Even a blind watchmaker is intelligent.' But being visually unintelligent, Mung, is a handicap - especially when he tries to make a watch.

Axel
October 30, 2015, 10:05 AM PDT
9) However, my firm conviction is the following: none of that can be solved by religious or philosophical preconvictions, or antireligious preconvictions, of any kind. These are scientific problems, and they must be addressed empirically, with a correct scientific epistemology, even in the aspects (and there are many) which have deep philosophical, or religious, implications.
Beautiful.

Upright BiPed
October 30, 2015, 09:49 AM PDT
The analogy I give my students to help them understand is the Microsoft Windows installer. If I have a blank computer, I can install Microsoft Windows onto it. Now, Windows is not a single application. It is actually an entire ecosystem of applications. It has Notepad, Wordpad, Terminal, Control Panel, etc. However, the Windows installer *is* a single application. It is an application that *contains* *all* of the data needed for all of the applications in the ecosystem. As the installer works, each individual application receives a subset of the information in the installer, based on the installer's knowledge of how that should be set up.

The final applications are all there by design. None of them arose by natural selection. But they all owe their physical origin to the single installer program which put them all in their correct place, with the appropriate information. However, such a process only works *if* the installer had sufficient information to create the other programs. It does *not* work if it had to generate those programs by trial-and-error, in the absence of sufficient information.

johnnyb
October 30, 2015, 09:44 AM PDT
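The installer analogy in the comment above can be put into a toy sketch. Everything here is an invented illustration (the payload contents and function names are not any real installer API): one frontloaded payload carries every component, and each "lineage" only ever selects a subset of it, never generating anything new.

```python
# Toy model of the frontloading/installer analogy from the comment above.
# All names and contents are illustrative inventions.

FRONTLOADED_PAYLOAD = {
    "notepad":   "text-editor component",
    "wordpad":   "rich-text component",
    "terminal":  "shell component",
    "flagellum": "motor assembly (latent until switched on)",
}

def express(switched_on):
    """Return only the requested components. Nothing new is created:
    every output was present in the payload from the start."""
    return {name: FRONTLOADED_PAYLOAD[name] for name in switched_on}

# Two "lineages" differ only in which frontloaded parts are expressed.
lineage_a = express(["notepad", "terminal"])
lineage_b = express(["notepad", "terminal", "flagellum"])

print("flagellum" in lineage_a)  # False
print("flagellum" in lineage_b)  # True
```

The point the toy makes is the same as the comment's: expression by selection is only possible because the full information was deposited up front; an empty payload would force trial-and-error generation instead.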
So, a little addendum. First of all, my goal in this thread was not to give a specific view as being better than others, but to help clarify what it would mean for someone to hold to a mechanistic view of common descent that was compatible with ID. And, as I've mentioned in the thread, the findings of Shannon/Yockey/Dembski show that, if one is willing to presuppose an information-rich origin of life, then there is no theoretical problem with genetic information having been transmitted through common descent, either intact or as an intelligent search algorithm. There is more to biology and evolution than the information problem (which is where my problems with common descent come in), and there are practical problems with the mechanistic common descent hypothesis, but as a theoretical construct, one can hold to both mechanistic common descent and ID.

This is different from Darwinism because Darwinism presupposes an information-poor origin of life, and supposes that the intricacies of biology have all been built up from essentially haphazard changes. An information-rich origin of life can allow for all of the intricacies of biology to be latently present until they are revealed, or for a sufficient search mechanism to find them when they are needed/wanted, or any combination of these.

johnnyb
October 30, 2015, 09:35 AM PDT
Virgil - The Voles story is very interesting, and it goes back to a point I made on the 98% thread - genes are not necessarily what evolution needs to work. This post is entirely on the genetic information problem, but there are other aspects of evolution that must be considered as well. As I mentioned in the thread, it may be possible for humans and chimps to have 100% similar DNA, and still maintain their differences, simply because genes are not equivalent to organisms. Thus, radical changes in DNA may not be equivalent to differing organisms. So, even after solving the information problem for common descent, there are other problems as well, which get little mention (which is why I mentioned my own in the thread).

johnnyb
October 30, 2015, 09:28 AM PDT
gpuccio - Lots of good thoughts. However, I want to contend one major point:
Therefore, I absolutely refute, as good explanations, both frontloading (in the sense of an intelligent agent acting only at the beginning of life) and theistic evolution (in the sense of an intelligent agent acting only at the beginning of the universe). None of them can explain even one single complex protein, not any more than neo darwinism can.
I disagree here, for the reasons that I gave to bfast. If the origin of life or of the universe was information-rich rather than information-poor, then there is no problem. The reason neo-Darwinism has problems is that its view of the origin of life is information-poor, and it has no mechanism to re-insert the information anywhere. Frontloading, on the other hand, simply inserts the information at the beginning. Now, this also ties into your first point, which I also take issue with:
It is, IMO, absolutely obvious that new complex functional information has been added to living beings throughout natural history. Each new protein superfamily, each new body plan, and so on, is the result of a huge addition of new complex functional information, and only conscious, intelligent, purposeful agents can do that.
I don't think that this is as obvious as you may think. It is perhaps more obvious that there has been a differentiation of complex functional information, but I challenge you to show me how to differentiate between the possibilities that information was (a) newly added or (b) newly expressed. If these situations look identical from the standpoint of the present, then I see no reason why one *must* prefer (a) over (b). And, if one chooses (b), then there is no logical reason why the information could not have been there from the beginning. Now, I agree with you that full frontloading is probably not the case, and there are good arguments against it. However, as far as information goes, there is nothing that logically precludes it (though there may be things that practically preclude it). As for your other points, I largely agree, especially with this one:
For me the only difference is that ID is absolutely supported by all known facts, while CD is supported by many facts, but there are some which are not well explained, and could even be against the theory. Let’s remember also that CD needs not be universal. After all, some designs could be from scratch.
johnnyb
October 30, 2015, 09:22 AM PDT
bfast -
I don’t find front-loading to be any help in this transformation. Neo-Darwinism is hooped, as far as “Edge of Evolution” theory would say. The “common design” hypothesis could obviously produce this effect. However, the “active agent” model, the view that an active agent has frequently twiddled with DNA, is the best explanation that I can find that does not destroy “common descent”.
I think that front-loading does it quite well. Yockey once, a long time ago, noted that as long as there has been a fixed length of time from the origin of life, the information problem goes away if one is willing to posit an information-rich origin of life. That is, the channel capacity theorem allows us to determine, for a given amount of transmitted data, a given amount of noise in the transmission channel, and the number of transmissions, how much original information is needed in the original ancestor. That amount may be a practical problem, but it is not a theoretical problem.

Specifically to your question, there could have been two genes which were identical except for this one change, and one copy was lost on each side. Or, more likely, there could have been a mechanism that creates the change. That is, there is some genetic functor that applies the mutation that you are looking for, or at least did at some point in evolutionary history. Provided that you are willing to concede that the amount of information in the past is greater than the amount of information in the present, this is very possible. Now, that isn't to say that this is a better model than the mutating actor theory, but it is certainly within the realm of conceptual plausibility, and Yockey/Shannon/Dembski have shown us how to apply it (Dembski's "Searching Large Spaces" contains what he calls the "No Free Lunch Regress", which basically applies Yockey's ideas to evolutionary search instead of static transmission).

johnnyb
October 30, 2015, 09:13 AM PDT
Just curious, what is the scientific support for Common Descent?

Virgil Cain
October 30, 2015, 07:08 AM PDT
bFast: I point to my favorite gene, the HAR1F... In the human there is a fundamental shift in the 3D configuration of the RNA. This 3D shift may be accomplishable with as few as 6 mutations, but not less. Just curious. What is the scientific support for this statement?

Zachriel
October 30, 2015, 06:47 AM PDT
Common Descent? - Some Insurmountable Problems https://docs.google.com/document/d/1BBU4GVEPIxDDSre6YLqU5zbaXVdSk4RRMD8F7GU3DPM/edit

bornagain
October 30, 2015, 06:43 AM PDT
What I personally find a bit confusing is using the word 'evolution' in a context that assumes guidance. By definition, evolution is unguided. As soon as there is guidance, it is consequently not evolution. Also, the mentioned idea of a 'cosmic egg' seems weird to me. It follows that there must have been a cosmic hen or a cosmic turtle ;)

bFast, "then science is just silly." This is it! There is no other sensible answer to how it all came to be, except through God. In God all infinite regress disappears, while He Himself always stays out of the 'equation' of science. He sustains the universe and miraculously intervenes, but His providence is out of the scope of science. In my understanding, wise science, as opposed to silly science, knows its own limits. To certain questions, there cannot be scientific answers. I know it may seem like putting artificial bounds on science. But these limits do exist. It may not be satisfying to a scientist's mind, but humility is greater than science :) I think scientifically we can go as far as asserting the evident failures of naturalism in its attempts to provide an explanation of the origins of the universe and of life. ID is helpful in this regard. But that's about it, really.

EugeneS
October 30, 2015, 05:51 AM PDT
In highlighting the rich diversity of intelligent design theories, I want to focus attention once more on the idea that the DNA system is like a little universe in it's own right. In a similar way that a 3D computersimulation, or human imagination are worlds in their own right. Such a 3D DNA world can then copy objects from the universe proper, and in this DNA world, complete designs of an adult organism can be chosen as a whole. The 3D DNA adult then guides development of the physical organism to adulthood. Non-coding DNA? It could just as well be representations of the moon and earth cycles in the DNA world. There can just as well be a representation of a snake in human DNA, like in the garden of Eden, in order to recognize snakes as possibly lethal creatures. That is a common sense intelligent design theory which has got a lot going for it, that I don't hear enough about. It's been established by Peter Rowlands and Vanessa Hill that the mathematical ordering of the DNA system is the same as that of the physical universe, and the DNA world idea is speculation based on that fact. (there are 4 parameters mass, time, space and charge in physics, there are 4 bases CATG in biology. There are 64 elements of dirac algebra in physics, and there are 64 triplet codons in biology. etc. etc. etc. the mathematical ordering is the same)mohammadnursyamsu
October 29, 2015 at 05:47 PM PDT
2) Therefore, I absolutely reject, as good explanations, both frontloading (in the sense of an intelligent agent acting only at the beginning of life) and theistic evolution (in the sense of an intelligent agent acting only at the beginning of the universe). Neither of them can explain even one single complex protein, any more than neo-Darwinism can.

This is one of the reasons I have come up with the "self-repairing/constructive universe hypothesis" (SCUH). Here is the general breakdown:

I propose the self-constructive/repairing universe hypothesis (SCUH). SCUH basically says that the universe is constantly in the process of being created/repaired (creation never ceased) in order to achieve a definitive state of goodness. The laws of the universe, as well as entities within this universe (including, of course, biological intelligent entities), are helpers or extensions of the primary designer (who is acting externally and/or internally to this process) in achieving this eventual state of goodness. The evidence for this is overwhelming, IMHO. We find that, out of all the chaos/disorder/evil, the universe has a tendency to achieve a state of order/organization/goodness. This organization-bias principle is the driver of SCUH. Progression will not be entirely smooth and/or deterministic, but it is inevitable given the primary intent.

I believe this hypothesis is completely in line with ID as well as the evidence.

computerist
October 29, 2015 at 04:44 PM PDT
Mapou, "... convergence and gene transfers via viral infections." Convergence is not HGT. Viral transmission of genes is a purported mechanism, and I think there are other identified mechanisms; there may also be unidentified ones. In the bacterial world, HGT is apparently an abundant phenomenon. As such, it is unlikely that there isn't some sort of mechanism floating around.

bFast
October 29, 2015 at 04:07 PM PDT
bFast, the only mechanisms for HGT I know about are convergence and gene transfers via viral infections, both of which are astronomically unlikely. That is to say, the combinatorial explosion excludes both mechanisms. Even worse, neither hypothesis can be falsified; one must accept them on faith alone. Pseudoscience at its best.

Mapou
October 29, 2015 at 01:59 PM PDT
Virgil Cain, very interesting info about voles. I hadn't been particularly aware of voles until the last few years; it seems that my "mouse problems" were actually vole problems. Of all the interesting findings, the one I find most interesting is, "Nevertheless, voles are perfectly adept at recognizing those of their own species." This is certainly not universal among creatures, as I once had a peacock mistake me for a peahen. (Peacocks in heat are unbelievable desperadoes that try to mate with anything that moves.)

bFast
October 29, 2015 at 01:40 PM PDT
Mapou, we are getting down to definitions here. By your definition, common descent cannot support HGT. Mapou, "It is obvious that HGT falsifies ... Darwinism." I don't quite get that. If a natural mechanism is found that allows genes to be transferred (and I think a few have at least been proposed), then Darwinism is not toast. If you contend that Darwinism requires UCD, and that UCD holds to your definition, then I guess Darwinism is toast. However, this data alone does not defeat naturalism (RM+NS).

bFast
October 29, 2015 at 01:37 PM PDT
