
Fixing a Confusion


I have often noticed a certain confusion about one of the major points of the Intelligent Design movement: whether or not the design inference is primarily based on the failure of Darwinism and/or mechanism more generally.

This confusion was expressed in a recent thread by a commenter who said, “The arguments for this view [Intelligent Design] are largely based on the improbability of other mechanisms (e.g. evolution) producing the world we observe.” I’m not going to name the commenter, because this is a common confusion that a lot of people have.

The reason for this is largely historical. The arguments for design used to be very plain. Biology proceeded according to a holistic plan, both in the organism and in its environment. This plan indicated a clear teleology: the organism did things that were *for* something. These organisms exhibited a unity of being. That is evidence of design on its own, with no reference to the probability or improbability of any mechanism.

Then, in the 19th century, Darwin suggested another possible reason for this cohesion: natural selection. Unity of plan and teleological design, according to Darwin, could also happen due to selection.

Thus, the original argument is:

X, Y, and Z indicate design

Darwin’s argument is:

X, Y, and Z could also indicate natural selection

Therefore, we simply show that Darwin is wrong in this assertion. If Darwin is wrong, then the original evidence for design (which was not based on any probability) goes back to being evidence for design. The only reason probabilities appear in the modern design argument is that Darwinites have said, “you can get that without design”; so we modeled NotDesign as well, to show that it can’t be done that way.

So, the *only* reason we are talking about probabilities is to answer an objection. The original evidence *remains* the primary evidence on which the argument is based. Answering the objection simply removes the objection.

As a case in point, CSI is based on the fact that designed things have a holistic unity. Thus, they follow a specification that is simpler than their overall arrangement. CSI is the quest to quantify this point. It does involve a chance rejection region as well, but the main point is that designed things operate on principles simpler than their realization (which is what provides the reduced Kolmogorov complexity behind the specificational complexity).
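To make the compressibility point concrete, here is a minimal sketch (Python, using zlib compression as a crude stand-in for Kolmogorov complexity; this illustrates the intuition only, not Dembski's formal CSI calculation). A string that follows a short specification compresses far below a patternless string of the same length:

```python
import random
import zlib

def approx_description_bits(s: str) -> int:
    """Crude upper bound on description length: zlib-compressed size, in bits."""
    return len(zlib.compress(s.encode())) * 8

specified = "AB" * 500                                           # simple rule, 1000 characters
random_str = "".join(random.choice("AB") for _ in range(1000))   # no rule, 1000 characters

print(approx_description_bits(specified))   # small: the whole string follows a short specification
print(approx_description_bits(random_str))  # several times larger: no description shorter than the data
```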

Comments
BO'H: if your mountain is a pyramid in Egypt, function is readily identified and the FSCO/I involved points to design. We have many times given how the implicit information in functionally organised things can be quantified, i.e. use a structured sequence of y/n q's in a relevant, efficient description language and the chain length to specify will give a measure of involved information, indeed this was pointed out in 1973 by Orgel. However, dFSCI is already coded as a discrete state digit string, which can readily be converted into bits; there is utterly no need to redefine it, we already have broader metrics that actually work by identifying a reasonable equivalent string in some description language -- cf. AutoCAD. But also, functionally specific complex organisation and information is not a universal detector of codes, function, meaning etc. It addresses a significant but limited class of phenomena and allows us to draw momentous conclusions on empirical warrant. That is more than enough. KF
kairosfocus
December 5, 2016 at 08:22 AM PDT
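A minimal sketch of the "chain of yes/no questions" count described in the comment above, assuming a uniform alphabet so that each symbol costs log2(alphabet size) bits; the example strings are arbitrary:

```python
import math

def information_bits(sequence: str, alphabet_size: int) -> float:
    """Bits needed to specify the sequence, at log2(alphabet_size) bits per symbol."""
    return len(sequence) * math.log2(alphabet_size)

print(information_bits("ATCG" * 25, 4))    # 100 DNA bases -> 200.0 bits
print(information_bits("3141592653", 10))  # 10 decimal digits -> ~33.2 bits
```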
Bob O'H
Both of you agree that the observer chooses the function. But that makes it subjective, so it should be easy to get almost any object (designed or not) above 500 bits. Just define the function tightly enough. Do you have any way of restricting the specification to avoid that?
Just thinking out loud here, but it's not a question of getting any function to 500 bits (yes, you could create highly constrained functions), but of getting some functions under 500 bits. That's why I initially stated that some functions are too ambiguous - meaning, too subjective - to really analyze as such. Other biological functions are commonly referenced and understood as functions. Take the evolutionary origin of protein folds. Could we bring that function down under 500 bits of information somehow? That's actually what researchers would want to do: break it down to the smallest possible functional segment. But even then, the information content would exceed the design boundary.
(on the analogue/digital divide, I think this is easy to solve: simply define dFSI as a measure (basically make it a probability mathematically), so measure theory bridges the gap between continuous and discrete spaces)
Not sure how that works, but it seems like a good translation in theory. Information bits would be derived from probabilities on how they would occupy space ... or something?
Silver Asiatic
December 5, 2016 at 07:47 AM PDT
gpuccio @ 40 & then Silver Asiatic @ 41 -
According to my definition of functional complexity, any observer can define any function he likes for any object, analogic or digital. Then we can compute the complexity of the defined function. The point is, if we can define a function, any function, for an object, which requires at least 500 bits of information to be implemented, then we can infer design.
The function of the mountain has to be defined by the observer. If you chose a mountain as a place to build a home, for example, that’s the function of the mountain. Then you’d determine characteristics ideal for your home, and then investigate various mountains to see if they met your needs for function.
Both of you agree that the observer chooses the function. But that makes it subjective, so it should be easy to get almost any object (designed or not) above 500 bits. Just define the function tightly enough. Do you have any way of restricting the specification to avoid that? (on the analogue/digital divide, I think this is easy to solve: simply define dFSI as a measure (basically make it a probability mathematically), so measure theory bridges the gap between continuous and discrete spaces)
Bob O'H
December 5, 2016 at 07:15 AM PDT
GP: That is a good explanation, thanks. Answering Bob O'H's question:
I am not sure how you specify functionality in a mountain (for example).
The function of the mountain has to be defined by the observer. If you chose a mountain as a place to build a home, for example, that's the function of the mountain. Then you'd determine characteristics ideal for your home, and then investigate various mountains to see if they met your needs for function. The total number of mountains searchable is the search space. The number you've tested in the sample that successfully match is the target space.
Silver Asiatic
December 5, 2016 at 05:37 AM PDT
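For what it's worth, the search-space/target-space description above can be written in the usual functional-information form, I = -log2(target/search); the numbers below are purely hypothetical:

```python
from math import log2

search_space = 1_000_000   # candidate objects examined (hypothetical number)
target_space = 12          # objects meeting the defined function (hypothetical number)

functional_bits = -log2(target_space / search_space)
print(round(functional_bits, 1))   # ~16.3 bits, far below a 500-bit threshold
```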
Silver Asiatic: According to my definition of functional complexity, any observer can define any function he likes for any object, analogic or digital. Then we can compute the complexity of the defined function. The point is, if we can define a function, any function, for an object, which requires at least 500 bits of information to be implemented, then we can infer design. The same object can be used for different functions. Here: https://uncommondescent.com/intelligent-design/functional-information-defined/ I have tried to clarify in some detail how functional information can be defined and measured.
Of course, digital functional information (dFSI) is much easier to measure. Again, biological information is mainly digital. So, digital information is what we are interested in when we debate design in biology. But the concept remains the same for analogic information, as KF has often pointed out. You can always convert analogic information to digital form.
People like Bob O'H try to deny a simple truth which cannot be denied: some configurations in material objects bear functional information, and when that functional information is above some threshold (500 bits is a very safe threshold) the objects are always designed objects. Bob O'H has not even tried to explain why he would infer design, or not infer it, in front of some binary digital sequence that specifies the first 125 decimal digits of pi. 500 bits of functional information. The answer is simple: no law of necessity is known, or even imagined, that can generate such a sequence in a non designed system. And the probability of getting it by chance is in the range of the UPB: you will never find that sequence in a random material system in the universe.
The same is true for any sequence of at least 500 bits of functional information: you will never find those sequences in random systems, and if there is no known law of necessity that can reasonably be linked to that kind of sequence, then you can be sure that it was designed by some conscious designer who understood the meaning of that sequence, the reason why that sequence is different from some generic random sequence of that length. So, you will not find one of Shakespeare's sonnets in a sequence of grains of sand, and you will not find the source code for Excel in meteorological phenomena. And so on. When randomly typing monkeys generate Shakespeare's works, then Bob O'H will be vindicated. Until then, he is only an obstinate denier of the evidence.
gpuccio
December 5, 2016 at 05:24 AM PDT
Bob O'H
I am not sure how you specify functionality in a mountain (for example).
Where functionality is ambiguous or indefinable, research cannot proceed in that area. That's the way science works. Where there is a high degree of observable, specified function, measurements can be successfully carried out and testing done. Such is the case for digital code - and that's why it's a good example to test for ID.
Silver Asiatic
December 5, 2016 at 04:59 AM PDT
Bob O'H: Again, you are a true disappointment. First of all, I purposefully deal with digital functional complexity, for two simple reasons: 1) it's much easier to measure functional information, and 2) the information in the genome is digital. Of course, it is perfectly possible to measure functional complexity in analogic objects, but it is more difficult, and there is no reason to discuss analogic objects when all the information we are interested in is digital. But I will not spend any more time with your arrogant and completely senseless position. Good bye.
gpuccio
December 5, 2016 at 04:50 AM PDT
BO'H: Your talking points are clanging; GP has quite rightly spoken to literally trillions of successful tests (try a whole Internet full) and I have provided examples of just how far short efforts to get blind chance to work have fallen relative to the 500 bit target. There is a known, observable phenomenon. It critically depends on functional specificity and a threshold beyond which blind search is not a plausible source. After huge effort, cases of functional specificity of 10^100 factor short of the threshold have been seen. Empirically, intelligence routinely creates cases of dFSCI, including your own objections. The empirical observation and the search challenge line up. We have excellent reason to trust the reliability of dFSCI (and more broadly FSCO/I and even more broadly CSI) as a reliable sign of design. And yes, that points to DNA -- TEXT in the heart of the living cell -- as designed and also to body plans from microbes to man as designed. I am inclined to believe this is the real problem, where this points. KFkairosfocus
December 5, 2016 at 04:18 AM PDT
Origenes @ 33 - your evidence? Have you done what I suggested and formally compared designed and non-designed objects? gpuccio @ 34 - yes, we're getting nowhere. please don't try to read my mind - you're not very good at it. One reason why you should be the one to test your ideas is that you will then find out if there are any flaws or problems with it. e.g. I am not sure how you specify functionality in a mountain (for example). I'm not going to play your game of throwing digits at you, because I don't see why I should spend my time playing this game when you're not even willing to spend the time. If you are not prepared to test your method, then why should I? I see a lot of manuscripts where people suggest new methods for various things, and I've never seen anyone argue that it's not up to them to show their method works. Why should you be any different? It's not enough to have faith in your method - you have to actually demonstrate it.Bob O'H
December 5, 2016 at 04:03 AM PDT
BO'H: Pardon, but you have clearly spoken amiss. We have identified a key form of functionally specific, complex organisation and associated information, digitally coded text strings beyond a threshold, 500 or 1,000 bits makes but little practical difference. These have been discussed in and around UD for years under the acronym dFSCI. Both criteria are important,
a: informational functional specificity [here, a message or algorithmic instructions and/or data etc] AND b: complexity beyond a threshold where the blind search resources of the observed solar system or cosmos (as appropriate) cannot plausibly search out a sufficient proportion of the configuration space to have any reasonable chance of finding isolated islands of function.
Random text examples of searching out meaningful strings have been shown, and we readily see that they fall short of the complexity criterion by about a factor of 10^100 in terms of scope of config space. In typing and posting objections in the form of text in English, you yourself are exemplifying how, routinely, such messages of more than 72 - 143 ASCII characters are produced by intelligently directed configuration. There are literally trillions of cases in point. Reliably, once we are beyond the reasonable thresholds, dFSCI is the product of intelligence, and the search challenge analysis shows why. Indeed, imagined golden searches face the problem that a search for a search selects from the set of sub sets of a set. So, if the direct blind search searches a space of order n possibilities, the search for a golden search -- equally blindly -- must search in a config space of scope 2^n. Exponentially harder, especially when n is already ~10^150 or ~10^301. So, we have an empirically based, analytically backed criterion for dFSCI that makes it a very reliable sign of design as cause. Indeed, a morally certain one. I am sure that on the relevant thought exercise, astronauts encountering a wall with text giving the first 125 digits of pi in binary code, will instantly infer to design. For excellent cause. For that matter, if they encounter a wall, that would be sufficient functionally specific complex organisation and implicitly associated information to similarly infer design. For that matter, simply encountering a polished cuboidal stone monolith with precise faces, angles and proportions that reflect relevant ratios such as the golden rectangle would meet the criterion also. Finding text or an illustration inscribed or carved into it would actually only be a capstone, the issue would be, can we decode it. And of course, I here have in mind the Rosetta stone or the Behistan rock. The mysterious Voynich manuscript is certainly designed, the question is is its text nonsense or is it a crack-able code. All of these reflect, the underlying acceptance that at the relevant time and place, there could be designers, intelligent agents capable of imposing a purposeful configuration that fulfills some meaningful, configuration-dependent function. Which is where I think an underlying issue is: if one implicitly assumes no possible designer could be there, then one will insist on inferring to any other conceivable possibilities. Now, including quasi-infinite multiverses and/or actually infinite past time. Of course, the onward problem is, that there is a class of cases of unknown provenance that manifest copious dFSCI. namely, cell based life. Text lies in the heart of the cell and at the heart of getting its workhorse molecules to do their jobs, proteins and enzymes in particular. On this subject, the great astronomer, Sir Fred Hoyle, noted during his c 1981 Caltech talk:
The big problem in biology, as I see it, is to understand the origin of the information carried by the explicit structures of biomolecules. The issue isn't so much the rather crude fact that a protein consists of a chain of amino acids linked together in a certain way, but that the explicit ordering of the amino acids endows the chain with remarkable properties, which other orderings wouldn't give. The case of the enzymes is well known . . . If amino acids were linked at random, there would be a vast number of arrangements that would be useless in serving the purposes of a living cell. When you consider that a typical enzyme has a chain of perhaps 200 links and that there are 20 possibilities for each link, it's easy to see that the number of useless arrangements is enormous, more than the number of atoms in all the galaxies visible in the largest telescopes. This is for one enzyme, and there are upwards of 2000 of them, mainly serving very different purposes. So how did the situation get to where we find it to be? This is, as I see it, the biological problem - the information problem . . . . I was constantly plagued by the thought that the number of ways in which even a single enzyme could be wrongly constructed was greater than the number of all the atoms in the universe. So try as I would, I couldn't convince myself that even the whole universe would be sufficient to find life by random processes - by what are called the blind forces of nature . . . . By far the simplest way to arrive at the correct sequences of amino acids in the enzymes would be by thought, not by random processes . . . . Now imagine yourself as a superintellect working through possibilities in polymer chemistry. Would you not be astonished that polymers based on the carbon atom turned out in your calculations to have the remarkable properties of the enzymes and other biomolecules? Would you not be bowled over in surprise to find that a living cell was a feasible construct? Would you not say to yourself, in whatever language supercalculating intellects use: Some supercalculating intellect must have designed the properties of the carbon atom, otherwise the chance of my finding such an atom through the blind forces of nature would be utterly minuscule. Of course you would, and if you were a sensible superintellect you would conclude that the carbon atom is a fix.
That is the challenge you and other objectors to the joint design inferences on FSCO/I (especially dFSCI) and cosmological fine tuning face, for the very balance of atoms in our observed cosmos is suspiciously set up to emphasise key ingredients of cell based life, with two components that exhibit astonishing and effectively unique properties: Carbon with organic, chaining based chemistry, and water, H2O with its astonishing functionalities. I remind, that is three of the four most abundant elements in the cosmos, the C and O being linked to the first fine tuning result identified by Hoyle and Fowler in 1953. The very same year in which Crick and Watson identified DNA and the former concluded that this was a text-bearing molecule that in effect was the chemical basis for the gene. (Quite a year that, was it a good wine year?) Mix in N, which is nearby in abundance, and we are looking at proteins already. Then, you need to explain C-chemistry, aqueous medium, molecular nanotech, text using, terrestrial planet, cell based life. I have, for cause, long since concluded Sir Fred was right: [b]y far the simplest way to arrive at the correct sequences of amino acids in the enzymes would be by thought, not by random processes. Or, more precisely, by blind chance and/or mechanical necessity on the gamut of sol system or actually observed cosmos. KFkairosfocus
December 4, 2016 at 09:29 PM PDT
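As a rough check on the figures in the comment above: the 500- and 1000-bit thresholds correspond to configuration spaces of about 10^150 and 10^301 states, and the commonly cited resource estimates (about 10^80 atoms, 10^45 state transitions per second, 10^25 seconds) give roughly 10^150 possible trials. These constants are the usual assumptions in this literature, not measurements:

```python
from math import log10

# Configuration-space sizes for the two thresholds
print(500 * log10(2))    # ~150.5 -> 2^500  is roughly 10^150 configurations
print(1000 * log10(2))   # ~301.0 -> 2^1000 is roughly 10^301 configurations

# Commonly cited resource bound (assumed figures, not measurements)
atoms, transitions_per_sec, seconds = 1e80, 1e45, 1e25
print(log10(atoms * transitions_per_sec * seconds))   # 150.0 -> about 10^150 trials

# The "search for a search" point: if the direct search space has N = 2^500
# configurations, the space of possible searches (subsets of that space) has
# size 2^N, i.e. 2^(2^500) -- exponentially harder still.
```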
Bob O'H: You disappoint me. You know quite well, like anybody else here, that no non designed object exhibits 500 bits of functional information. You know perfectly well that you cannot offer one single counter-example, that nobody can. At least some of your fellow ID critics, at TSZ, have tried, in the past, without succeeding. I still remember the sincere, but desperate, attempts of some of them. They had courage, but they could not succeed in an impossible task. Are you an intellectual coward? You insist on saying things like: "It's clear that nobody has done what gpuccio and I agree is an obvious thing to do to present a positive case for a design inference." when:
1) You have not offered a single example of a non designed object that can exhibit some modest level of dFSCI, let alone 500 bits. And yet you have the whole universe at your disposal, from planets to grains of sand to randomly generated strings. And you are absolutely free to define any possible function for any possible object.
2) I have offered millions of obvious examples of designed objects which do exhibit dFSCI in tons, well beyond the threshold of 500 bits, this post being one more of them.
3) You have not even answered my mental experiment with your simple opinion: would you infer design or not? And why?
But you boldly state: "I think we're getting nowhere." You disappoint me. It is an absolute truth that you can give me any number of digital sequences, and I will always be able to infer design correctly with no false positives and, obviously, many possible false negatives, using the simple concept of dFSCI. In scientific terms, that means that dFSCI can infer design correctly with 100% specificity. I challenge you, and anyone else, to falsify this statement. Good luck.
gpuccio
December 4, 2016 at 05:12 PM PDT
Bob O'H: ... it’s one thing to say that all designed objects have some property, but that’s useless as a criterion for design if non-designed objects have the same property.
The good news is that this is not the case. Non-designed objects do not have the same property.
GPuccio: I cannot think of any category of non designed objects which has any relevant value of dFSCI. As far as I can see, any non designed objects that can be read as some digital sequence cannot be used to implement any non trivial function.
Origenes
December 4, 2016 at 02:24 PM PDT
I think we're getting nowhere. It's clear that nobody has done what gpuccio and I agree is an obvious thing to do to present a positive case for a design inference. kf - it's one thing to say that all designed objects have some property, but that's useless as a criterion for design if non-designed objects have the same property. Thus you need to show that the property doesn't apply to non-designed objects if it is to be used as a criterion for design.
Bob O'H
December 4, 2016 at 01:55 PM PDT
PS: Random document generation cases, from Wiki:
One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, "VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona". Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".[24] A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulated a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters: RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d...
A mere factor of 10^100 or so short of the scope of config spaces that mark the 500 bit threshold.
kairosfocus
December 4, 2016 at 01:29 PM PDT
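One way to reproduce the "factor of 10^100 short" figure, assuming 7 bits per ASCII character (the same convention that makes 72-143 characters correspond to 500-1000 bits):

```python
from math import log10

bits_per_char = 7         # ASCII convention implied by the 72-143 character range
best_match_chars = 24     # longest match reported in the Wikipedia excerpt above

matched_bits = best_match_chars * bits_per_char   # 168 bits
shortfall_bits = 500 - matched_bits               # 332 bits
print(shortfall_bits * log10(2))  # ~99.9 -> the remaining space is ~10^100 times larger
```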
BO'H: Note, GP is identifying digitally coded s-t-r-i-n-g data structures [so, inherently contingent] bearing complex, configuration based specific messages of scope at least 500 bits. There are trillions of observed case of observed cause; in every one of them, the separately known cause is design. Intelligently directed configuration. This is backed up by a search challenge analysis on available atomic resources and time. So, we have a scientifically observed empirical pattern of high reliability backed up by an analysis that makes good sense of what we see. This warrants the confident albeit inherently provisional inference (as obtains for scientific laws in general) that dFSCI in particular is a reliable sign of design. The thought exercise case would instantly lead a reasonable observer to the conclusion, design. The same would obtain if instead we found hardened petrified mud that captured a mould of the string through a flood or the like. The mould would be a natural aspect, the text string evidently the product of art. Your attempt to demand of GP that he provide a counter-example is thus misdirected. I suggest, that you review your argument so far. KFkairosfocus
December 4, 2016 at 10:45 AM PDT
Bob O'H: "how many non-designed objects have you tried to calculate dFSCI for?" I cannot think of any category of non designed objects which has any relevant value of dFSCI. As far as I can see, any non designed objects that can be read as some digital sequence cannot be used to implement any non trivial function. Again, if you believe differently, explain why. "Frankly, it’s a distraction because it’s contrived, and still doesn’t get to the issue" What do you mean? It's a mental experiment, an important category in scientific reasoning. I would really appreciate your thoughts. And it does get to the issue. Because, if you can believe that such an object could be non designed, that the best explanation for such an object is some random non designed event, then this is exactly the kind of object that you should show me as a counter example to falsify my reasoning. Therefore, I reiterate my invitation: just comment on that example, or simply show me a single counterexample of that kind: a single object, or series of data, anything, that you can demonstrate originated in a system without any intervention of a designer, and that can be read as the sequence of the first 125 decimal digits of pi, or can be used to implement any other function, as defined by you, for which 500 specific bits of information are needed. It's simple enough. Will you join the mental experiment?gpuccio
December 4, 2016 at 09:13 AM PDT
still doesn’t get to the issue
It is the issue, Bob, and you punted.
Upright BiPed
December 4, 2016 at 08:20 AM PDT
Bob O'H, I have a question for you. Since the default assumption in science was that life was designed until natural selection came along and could supposedly finally explain that apparent Design without a Designer, and since advances in the mathematics of population genetics have now shown that natural selection is grossly inadequate as that Designer substitute, then why, with the falsification of natural selection as the Designer substitute, was design not then reinstituted as the default assumption in science instead of the adoption of neutral theory and various other pure chance theories? A few notes:
“Yet the living results of natural selection overwhelmingly impress us with the appearance of design as if by a master watchmaker, impress us with the illusion of design and planning.” Richard Dawkins – “The Blind Watchmaker” – 1986 – page 21 quoted from this video – Michael Behe – Life Reeks Of Design – 2010 – video https://www.youtube.com/watch?v=Hdh-YcNYThY “The Third Way” – James Shapiro, Denis Noble, and etc.. etc..,,, Excerpt: “some Neo-Darwinists have elevated Natural Selection into a unique creative force that solves all the difficult evolutionary problems without a real empirical basis.” http://www.thethirdwayofevolution.com/ The waiting time problem in a model hominin population – 2015 Sep 17 John Sanford, Wesley Brewer, Franzine Smith, and John Baumgardner Excerpt: The program Mendel’s Accountant realistically simulates the mutation/selection process,,, Given optimal settings, what is the longest nucleotide string that can arise within a reasonable waiting time within a hominin population of 10,000? Arguably, the waiting time for the fixation of a “string-of-one” is by itself problematic (Table 2). Waiting a minimum of 1.5 million years (realistically, much longer), for a single point mutation is not timely adaptation in the face of any type of pressing evolutionary challenge. This is especially problematic when we consider that it is estimated that it only took six million years for the chimp and human genomes to diverge by over 5 % [1]. This represents at least 75 million nucleotide changes in the human lineage, many of which must encode new information. While fixing one point mutation is problematic, our simulations show that the fixation of two co-dependent mutations is extremely problematic – requiring at least 84 million years (Table 2). This is ten-fold longer than the estimated time required for ape-to-man evolution. In this light, we suggest that a string of two specific mutations is a reasonable upper limit, in terms of the longest string length that is likely to evolve within a hominin population (at least in a way that is either timely or meaningful). Certainly the creation and fixation of a string of three (requiring at least 380 million years) would be extremely untimely (and trivial in effect), in terms of the evolution of modern man. It is widely thought that a larger population size can eliminate the waiting time problem. If that were true, then the waiting time problem would only be meaningful within small populations. While our simulations show that larger populations do help reduce waiting time, we see that the benefit of larger population size produces rapidly diminishing returns (Table 4 and Fig. 4). When we increase the hominin population from 10,000 to 1 million (our current upper limit for these types of experiments), the waiting time for creating a string of five is only reduced from two billion to 482 million years. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4573302/ Haldane’s Dilemma Excerpt: Haldane, (in a seminal paper in 1957—the ‘cost of substitution’), was the first to recognize there was a cost to selection which limited what it realistically could be expected to do. He did not fully realize that his thinking would create major problems for evolutionary theory. He calculated that in man it would take 6 million years to fix just 1,000 mutations (assuming 20 years per generation).,,, Man and chimp differ by at least 150 million nucleotides representing at least 40 million hypothetical mutations (Britten, 2002). 
So if man evolved from a chimp-like creature, then during that process there were at least 20 million mutations fixed within the human lineage (40 million divided by 2), yet natural selection could only have selected for 1,000 of those. All the rest would have had to been fixed by random drift – creating millions of nearly-neutral deleterious mutations. This would not just have made us inferior to our chimp-like ancestors – it surely would have killed us. Since Haldane’s dilemma there have been a number of efforts to sweep the problem under the rug, but the problem is still exactly the same. ReMine (1993, 2005) has extensively reviewed the problem, and has analyzed it using an entirely different mathematical formulation – but has obtained identical results. John Sanford PhD. – “Genetic Entropy and The Mystery of the Genome” – pg. 159-160 Kimura’s Quandary Excerpt: Kimura realized that Haldane was correct,,, He developed his neutral theory in response to this overwhelming evolutionary problem. Paradoxically, his theory led him to believe that most mutations are unselectable, and therefore,,, most ‘evolution’ must be independent of selection! Because he was totally committed to the primary axiom (neo-Darwinism), Kimura apparently never considered his cost arguments could most rationally be used to argue against the Axiom’s (neo-Darwinism’s) very validity. John Sanford PhD. – “Genetic Entropy and The Mystery of the Genome” – pg. 161 – 162 Kimura (1968) developed the idea of “Neutral Evolution”. If “Haldane’s Dilemma” is correct, the majority of DNA must be non-functional. – Sanford
In other words, Neutral theory, and the concept of junk DNA, was not developed because of any compelling empirical observation, but was actually developed because it was forced upon Darwinists by the mathematics of population genetics. In plain English, neutral theory, and the concept of junk DNA, is actually the result of a theoretical failure of Darwinian evolution, specifically a failure of natural selection itself, within the mathematics of population genetics!
“many genomic features could not have emerged without a near-complete disengagement of the power of natural selection” Michael Lynch The Origins of Genome Architecture, intro “a relative lack of natural selection may be the prerequisite for major evolutionary advance” Mae Wan Ho Beyond neo-Darwinism: Evolution by Absence of Selection "The publication in 1983 of Motoo Kimura's The Neutral Theory of Molecular Evolution consolidated ideas that Kimura had introduced in the late 1960s. On the molecular level, evolution is entirely stochastic, and if it proceeds at all, it proceeds by drift along a leaves-and-current model. Kimura's theories left the emergence of complex biological structures an enigma (since Natural Selection no longer played a role), but they played an important role in the local economy of belief. They allowed biologists to affirm that they welcomed responsible criticism. "A critique of neo-Darwinism," the Dutch biologist Gert Korthof boasted, "can be incorporated into neo-Darwinism if there is evidence and a good theory, which contributes to the progress of science." By this standard, if the Archangel Gabriel were to accept personal responsibility for the Cambrian explosion, his views would be widely described as neo-Darwinian." - David Berlinski - Majestic Ascent: Berlinski on Darwin on Trial - November 2011 (With the adoption of the 'neutral theory' of evolution by prominent Darwinists, and the casting aside of Natural Selection as a major player in evolution),,, "One wonders what would have become of evolution had Darwin originally claimed that it was simply the accumulation of random, neutral variations that generated all of the deeply complex, organized, interdependent structures we find in biology? Would we even know his name today? What exactly is Darwin really famous for now? Advancing a really popular, disproven idea (of Natural Selection), along the lines of Luminiferous Aether? Without the erroneous but powerful meme of “survival of the fittest” to act as an opiate for the Victorian intelligentsia and as a rationale for 20th century fascism, how might history have proceeded under the influence of the less vitriolic maxim, “Survival of the Happenstance”?" - William J Murray “Darwinism provided an explanation for the appearance of design, and argued that there is no Designer — or, if you will, the designer is natural selection. If that’s out of the way — if that (natural selection) just does not explain the evidence — then the flip side of that is, well, things appear designed because they are designed.” Richard Sternberg – Living Waters documentary Whale Evolution vs. Population Genetics – Richard Sternberg and Paul Nelson – (excerpt from Living Waters video) https://www.youtube.com/watch?v=0csd3M4bc0Q
And when looking at Natural Selection from the physical perspective of what is actually going on physically, then it is very easy to see exactly why Natural Selection is ‘not even wrong’ as an explanation for the ‘apparent design’ we see pervasively throughout life:
The abject failure of Natural Selection on two levels of physical reality – video (2016) (princess and the pea paradox & quarter power scaling) https://uncommondescent.com/evolution/denis-noble-why-talk-about-replacement-of-darwinian-evolution-theory-not-extension/#comment-619802
Thus, since natural selection. i.e. Darwin’s greatest claim to scientific fame, is thrown under the bus by the math of population genetics, (and by empirical evidence itself), then Darwin was certainly NOT a great scientist as many of his present day adherents claim that he was. In fact, Charles Darwin, whose degree was in Theology, and whose book “Origin” is replete with bad liberal theology, is more properly classified as being a bad liberal theologian who was trying to impose his anti-Theistic beliefs onto science rather than as a great scientist who was trying to discover new truths about the world through experimentation.
Charles Darwin’s use of theology in the Origin of Species – STEPHEN DILLEY Abstract This essay examines Darwin’s positiva (or positive) use of theology in the first edition of the Origin of Species in three steps. First, the essay analyses the Origin’s theological language about God’s accessibility, honesty, methods of creating, relationship to natural laws and lack of responsibility for natural suffering; the essay contends that Darwin utilized positiva theology in order to help justify (and inform) descent with modification and to attack special creation. Second, the essay offers critical analysis of this theology, drawing in part on Darwin’s mature ruminations to suggest that, from an epistemic point of view, the Origin’s positiva theology manifests several internal tensions. Finally, the essay reflects on the relative epistemic importance of positiva theology in the Origin’s overall case for evolution. The essay concludes that this theology served as a handmaiden and accomplice to Darwin’s science. http://journals.cambridge.org/action/displayAbstract;jsessionid=376799F09F9D3CC8C2E7500BACBFC75F.journals?aid=8499239&fileId=S000708741100032X
To this day, since there is no experimental support for Darwinian evolution, bad liberal theology is still pervasive in the arguments of leading apologists for Darwinism:
Methodological Naturalism: A Rule That No One Needs or Obeys – Paul Nelson – September 22, 2014 Excerpt: It is a little-remarked but nonetheless deeply significant irony that evolutionary biology is the most theologically entangled science going. Open a book like Jerry Coyne’s Why Evolution is True (2009) or John Avise’s Inside the Human Genome (2010), and the theology leaps off the page. A wise creator, say Coyne, Avise, and many other evolutionary biologists, would not have made this or that structure; therefore, the structure evolved by undirected processes. Coyne and Avise, like many other evolutionary theorists going back to Darwin himself, make numerous “God-wouldn’t-have-done-it-that-way” arguments, thus predicating their arguments for the creative power of natural selection and random mutation on implicit theological assumptions about the character of God and what such an agent (if He existed) would or would not be likely to do.,,, ,,,with respect to one of the most famous texts in 20th-century biology, Theodosius Dobzhansky’s essay “Nothing in biology makes sense except in the light of evolution” (1973). Although its title is widely cited as an aphorism, the text of Dobzhansky’s essay is rarely read. It is, in fact, a theological treatise. As Dilley (2013, p. 774) observes: “Strikingly, all seven of Dobzhansky’s arguments hinge upon claims about God’s nature, actions, purposes, or duties. In fact, without God-talk, the geneticist’s arguments for evolution are logically invalid. In short, theology is essential to Dobzhansky’s arguments.”,, http://www.evolutionnews.org/2014/09/methodological_1089971.html
Darwinism is as unscientific today, if not more so, as it was when it was first introduced:
An Early Critique of Darwin Warned of a Lower Grade of Degradation – Cornelius Hunter – Dec. 22, 2012 Excerpt: “Many of your wide conclusions are based upon assumptions which can neither be proved nor disproved. Why then express them in the language & arrangements of philosophical induction?” (Sedgwick to Darwin – 1859),,, And anticipating the fixity-of-species strawman, Sedgwick explained to the Sage of Kent (Darwin) that he had conflated the observable fact of change of time (development) with the explanation of how it came about. Everyone agreed on development, but the key question of its causes and mechanisms remained. Darwin had used the former as a sort of proof of a particular explanation for the latter. “We all admit development as a fact of history;” explained Sedgwick, “but how came it about?”,,, For Darwin, warned Sedgwick, had made claims well beyond the limits of science. Darwin issued truths that were not likely ever to be found anywhere “but in the fertile womb of man’s imagination.” The fertile womb of man’s imagination. What a cogent summary of evolutionary theory. Sedgwick made more correct predictions in his short letter than all the volumes of evolutionary literature to come. http://darwins-god.blogspot.com/2012/12/an-early-critique-of-darwin-warned-of.html
bornagain77
December 4, 2016 at 08:17 AM PDT
On your pi example, we actually do have some information about who/whatever did that: they probably have 10 digits (and hence cannot be God, who has 13). Frankly, it's a distraction because it's contrived, and still doesn't get to the issue - have you (or someone else) properly tested dFSCI's ability to classify designed and non-designed objects?
Bob O'H
December 4, 2016 at 07:37 AM PDT
gpuccio - how many non-designed objects have you tried to calculate dFSCI for?
Bob O'H
December 4, 2016 at 07:15 AM PDT
Bob O'H: I apologize. Here are the working links:
https://uncommondescent.com/intelligent-design/defining-design/
https://uncommondescent.com/intelligent-design/functional-information-defined/
https://uncommondescent.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/
https://uncommondescent.com/intelligent-design/homologies-differences-and-information-jumps/
The work is easily done. The posts in this thread, most works of literature or poetry, most software code, most machines with a minimum of complexity, paintings, and so on, are all good examples of complex functional information. Most of them are well above the threshold of 500 specific bits. For a quantitative analysis for language, please look at my OP about English language. All these examples exhibit functionally specified information. Many of them (language, software) are digital, and are therefore good examples of dFSCI. All of them are human artifacts, and we can directly or indirectly assess their origin from design processes traceable to one or more conscious intelligent designers.
On the contrary, I am not aware of any example of non designed objects that exhibit FSI, or even better, dFSI, above the threshold of 500 bits. Here, neither I nor you nor anyone else can "do the work". There are no examples, period. If you believe differently, please offer at least one counter-example. Try this: generate as many random sequences by some random generator (characters, numbers, binary digits, whatever you like) of 500 bits complexity or more, and try to find some independent function (you can define any function you like) that requires at least 500 specific bits in the sequence, and that can be implemented by the sequence itself.
You say: "I didn't comment on 1b specifically, but a similar argument applies: you need to demonstrate this with evidence, not just assert it. To be honest, it doesn't seem particularly controversial: you seem to be (loosely!) saying that intelligence can produce complicated stuff. I'm not sure anyone would doubt that." IOWs, you are confirming my point: there is a specific rationale that links functional complexity to the subjective experiences of understanding and purpose. This need not be "demonstrated with evidence": it is a rationale, something that makes the design hypothesis consistent and credible. The evidence to demonstrate it comes from the empirical observations, IOW from point 1a.
Finally, I invite you to comment on the following example. Let's say that humans reach a faraway planet, where there is no life and no trace of civilization. The astronauts come to a stone wall in a mountain. On it, they observe a group of peculiar simple signs "carved" in the stone. Let's say ten loose rows of signs, each made loosely of 50 signs. Let's say that the signs are of two kinds (maybe deeper or less deep), so that they can unequivocally be grouped in two categories. Let's say that one of the astronauts, with some mathematical background, observes that the signs can be read as binary digits.
Let's say that the same astronaut, after a few attempts, finds that, reading the sequence from left to right and top down, we can derive a sequence of 500 bits and that, choosing one of the two possible assignations for 0 and 1, and if we group the sequence in binary words of 4 bits each, and interpret them as decimal digits, what we get is: 31415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938 that is the first 125 decimal digits of pi. Now, let's say that the signs, for their physical nature, could well be interpreted as the result of some non designed event, like some fall of micro meteorites, or anything else. But their configuration? I will state the obvious. 1) There is no imaginable law of necessity that can explain that specific configuration. 2) There is a definite functional definition for the configuration: any sequence of bits which, read as 4 bit words, convey the first 125 decimal digits of pi (which is an objective mathematical constant). 3) The probability of getting that specific (unique) configuration in a random system is of the order of 2^-500, Dembski's UPB. So, you can choose: 1) You stick to the idea that the sequence is the result of some random event (micro-meteorites, or else). 2) You seriously consider the explanation that the sequence was designed by aliens or by someone else. Someone, obviously, who knew the meaning of pi, and had some reason to carve it in the stone. Please, note that we have no information at all about the possible designer, its nature, its motives, its methods. Nothing on the planet helps. Your comment?gpuccio
December 4, 2016 at 06:30 AM PDT
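The 4-bits-per-digit encoding described in the comment above can be checked directly; the digit string below is the one quoted there (stated to be the first 125 decimal digits of pi):

```python
pi_digits = ("31415926535897932384626433832795028841971693993751"
             "05820974944592307816406286208998628034825342117067"
             "9821480865132823066470938")

bits = "".join(format(int(d), "04b") for d in pi_digits)
print(len(pi_digits), "digits ->", len(bits), "bits")   # 125 digits -> 500 bits

# Chance of hitting this single configuration in one uniform draw over 500-bit strings
print(2.0 ** -500)   # ~3e-151, the order of Dembski's universal probability bound
```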
gpuccio - thanks. It looks like you're punting on the "obvious point". I'm not going to do your work for you: you need to show that your method works for most/all cases. This is how we do things in science. (BTW, the links go to the pages where you can edit the pages, so I don't have access.) I didn't comment on 1b specifically, but a similar argument applies: you need to demonstrate this with evidence, not just assert it. To be honest, it doesn't seem particularly controversial: you seem to be (loosely!) saying that intelligence can produce complicated stuff. I'm not sure anyone would doubt that.
Bob O'H
December 4, 2016 at 02:04 AM PDT
Bob O'H: "has anyone gone round and looked at a variety of designed and non-designed objects, and shown that functional complexity is higher in designed objects? That would be the obvious first step to making this positive argument hold water." It is rather obvious. However, I have had long debates with some of the best ID critics, and we have even made a sort of "game" where they were invited to offer even one counter-example which could falsify my statement about functional complexity. Nobody could do it. You are invited, too. The simple truth is that you will never find any object in the universe whose origin is well known (either designed or not designed) which exhibits more than 500 bits of digital functionally specified information (what I call dFSCI). The game is: show me one such object, and you will have falsified the theory, at least according to my presentation of it. You are welcome to try. For my definitions of dFSCI, and examples of how to measure it, you can look at my posts here: https://uncommondescent.com/wp-admin/post.php?post=57553&action=edit https://uncommondescent.com/wp-admin/post.php?post=59796&action=edit https://uncommondescent.com/wp-admin/post.php?post=64884&action=edit https://uncommondescent.com/wp-admin/post.php?post=76913&action=edit By the way, I see that you have commented on my point 1a, but not on my point 1b. That is important, too, for the discussion that was raised here.gpuccio
December 3, 2016 at 11:01 AM PDT
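A toy version of the "game" proposed above, with one arbitrary functional specification (every byte of the random string must decode to a lowercase letter or a space); this is only an illustration, not gpuccio's formal dFSCI procedure:

```python
import random
import string

ALLOWED = set(string.ascii_lowercase + " ")

def passes_spec(bits: str) -> bool:
    """One arbitrary 'function': every 8-bit byte decodes to a lowercase letter or a space."""
    return all(chr(int(bits[i:i + 8], 2)) in ALLOWED for i in range(0, len(bits), 8))

trials, hits = 100_000, 0
for _ in range(trials):
    bits = format(random.getrandbits(504), "0504b")   # 63 bytes, just over the 500-bit mark
    hits += passes_spec(bits)

print(hits, "of", trials)   # expected hit rate (27/256)**63, i.e. effectively zero
```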
gpuccio @ 4 - (sorry for not replying earlier - I've been travelling)
The positive empirical argument for design inference is that functional complexity higher than some appropriate threshold is observed only in designed objects (human artifacts),
In practice this argument is an assertion, but I'm not aware of any rigorous testing of it. Has anyone gone round and looked at a variety of designed and non-designed objects, and shown that functional complexity is higher in designed objects? That would be the obvious first step to making this positive argument hold water.
Bob O'H
December 3, 2016 at 05:10 AM PDT
Actually it is accurate that creationists generally only discredit evolution theory and don't do creation science. When you would be doing actual straightforward creation science, then you would focus on the actual act of creation itself, and not on the result of it. But no creationist does that except me. The mechanism of creation is choosing, but 0 creationists have any interest in doing science about how things are chosen in the universe. Millions of creationists but 0 of them have interest in doing science about it.
mohammadnursyamsu
December 2, 2016 at 06:21 PM PDT
On the original post, not the comments: This is interesting. I will have to think it out a while. I do believe that I am one who has been suffering from this confusion.
bFast
December 2, 2016 at 04:46 PM PDT
Bob:
Anyway, the “X, Y, Z indicate design” part of ID barely exists: it’s a bold assertion (“lots of intricate parts indicate design”) but isn’t explored in any detail.
If you have an issue with the boldness of the assertion or the lack of exploration, maybe you should take that up with Dawkins and his claim that living things have the appearance of design?
Phinehas
December 2, 2016 at 12:59 PM PDT
Hey JB, thanks for the video link on Specified Complexity. Great stuff! I watched the whole thing and understood every bit of it. But I don't doubt ID-opponents will somehow manage to remain perpetually perplexed and confused by the concept.
Phinehas
December 2, 2016 at 12:54 PM PDT
Here's an example of serious comments by gpuccio that may help to clarify some potential confusion about ID. BTW, I have 'official' permission to quote gpuccio's comments anywhere in this site. :) This was posted @28 in an interesting discussion thread last April:
gpuccio’s excellent comments posted in this thread (this far) are literally textbook material and could be a separate new OP in UD: [I have used some ‘artistic freedom’ to make minor adjustments to the lyrics so that it fits within the melody, without changing the meaning of the author’s message] 1. Epigenetics is a constant interaction between a static form of information (the nucleotide sequence stored in DNA, both protein coding and non coding) and its dynamic expression as transcriptomes and proteomes in each different cell state. In that sense, there is no condition in the cell life which is not at the same time genetic and epigenetic. For example, the zygote which originates multicellular beings has its own distinctive epigenetic state: the DNA is expressed in the zygote in different ways than it will be expressed in different states of the embryo, or in different specific tissue cells, both stem cells and differentiated cells. The epigenetic state of the zygote, in turn, is derived mainly from the cytoplasm of the oocyte, but also from epigenetic messages in the sperm cell. So, at each moment of the life of a cell, or even more of a multicellular being, the total information which is being expressed is a sum of genetic and epigenetic information. And, whatever you may think, any theory about the origin of biological information must explain how the total information content which is expressed during the life span of some biological being came into existence. 2. Does the actual “information” still rely on the DNA.? Not all of it, certainly. The cytoplasm, as I said, bears information too. And so does the state in which DNA is when it is transmitted in cell division. There is never a moment where DNA is in some “absolute” state. It is always in some epigenetic state. And the cytoplasm, or the nucleus itself as a whole, have specific information content at each state. The sum total of proteins and RNAs expressed, for example. As “life always comes from life”, life is always a continuous dynamic expression of genetic and epigenetic information. When Venter builds his “artificial” genomes, copying and modifying natural genomes, he has to put them into a living cell. IOWs, he is introducing a modified genetic component into a specific existing epigenetic condition. Remember, life is a dynamic, far from equilibrium condition, not a static storage of information. 3. Haven’t evolutionists known this for decades? Not exactly. The huge complexity of epigenetic networks, the whole complex and parallel levels which contribute to them (DNA methylation, histone code, topologically associated domains and dynamic 3d DNA structures, the various influences of different regulatory RNAs, the incredibly combinatorial complexity of transcription factors, the role of master regulators in differentiation, are all topics which have been “discovered” recently enough, and all of them are still really poorly understood. Whatever controls and coordinates the whole system of epigenetic regulations, moreover, is still a true mystery, be it in DNA or elsewhere. 4. I would like to mention here that epigenetics has at least two rather different aspects. One is the way that biological beings can interact with the outer environment, and then pass some information derived from that environment to further generations, through persistent epigenetic adaptations. This is what we could call the “Lamarckian” aspect of epigenetics. It is an aspect which is now well proven and partly understood, and it is certainly interesting. 
But, IMO, the truly revolutionary aspect of epigenetics is the complex network of regulations that allow different expressions of the same genome under different biological conditions, especially cell differentiation. That aspect has practically nothing to do with environment, either outer or inner, if we intend environment as something which is independent of the biological being, and which can modify its responses according to unpredictable, random influences. Indeed, this second aspect of epigenetics is all about information, and the management of information. IOWs, it’s the biological being itself which in some way guides and controls its own development. Now, you seem to believe that any form of such control must necessarily originate from the genome, because we have thought for a long time that the genome was the only depository of transmissible information. But today we know that the simple sequence of nucleotides in the genome is not enough. I will try to be more clear. In Metazoa, we have hundreds, maybe thousands, of different genomic expressions from the same genome. In the same being. How is that possible? DNA is a rather static form of information, in a sense: it is just a sequence of nucleotides. That sequence can be of extreme importance, but in itself it has no power. For example, even a protein coding gene is of no use if it is not “used” by the complex transcription / translation machinery. So, let’s say that we have a zygote. Let’s call its genetica information G1. G1 is not the basic DNA sequence which is the genome, but the specific DNA in the zygote condition, with all the modifications which make it partly expressed and partly inhibited, in different ways and grades. So, it is not “the genome”, but “one of the possible forms of the genome”. At the same time, the zygote has an active epigenome, in the cytoplasm and the nucleus, in the form of proteins (especially transcription factors), RNAs, and so on. IOWs, we have a specific transcriptome and proteome of the zygote, which we can call E1. So, we have: Zygote = G1 + E1 Now, the important point is that even in the “stable” condition of that zygote (IOWs, before any further differentiation happens) the flow of information goes both ways: from G1 to E1, and vice versa. The existing epigenome can and does modify the state of the existing genome, and vice versa. IOWs: G1 -> <- E1 Now, let's say that the zygote divides, and becomes two cells which are no more a zygote. IOWs, we have a division with some differentiation. Now, in each of the two daughter cells (in the simpler case of a symmetric division) there is a new dynamic state: G2 E2 Both the genomic state and the epigenomic state have changed, and that’s exactly what makes the daughter cell “different”: IOWs, cell differentiation. Now, the points I would like to stress are the following: 1) Any known and existing state of a living cell or being is always the sum of some G + some E. There is no example of any isolated G or E capable of generating a living being. 2) We really don’t know what guides the transition from any G1 + E1 state to the differentiated G2 + E2 state. We know much of what is involved in the transition, of what is necessary, and of how many events take place. But the really elusive question is: what kind of information initiates the specific transition, and chooses what kind of transition will happen, and regulates the process? Is it part of G1? Is it part of E1? Or, more likely, some specific combination of both? 
IOWs, I would suggest considering as biological information not only the sequence of nucleotides in the basic genome, but also all the complex forms that G and E take in specific and controlled sequences. At any state, the information present is always the sum total of a specific G and a specific E, and never simply the basic genome G.

Now, whatever you may think, or hope, the same evolutionary science that you invoke, and that has never been able to explain the origin of a single complex functional protein (but has at least tried), really has nothing to say about those epigenetic regulatory networks, for two very simple reasons:

a) For the greatest part, we have no idea of where the information is, and it is really difficult to explain what we don’t know and understand.

b) The part that we do know and understand (and it is now a rather large part) is simply too complex and interconnected [interwoven?] to even attempt any traditional explanation in terms of RV + NS.

That is the simple situation. Science is a great and wonderful process, especially if it is humble, and tries to understand things instead of simply declaring that it can explain what it does not understand.

5. “Is it not utterly mysterious that an existing epigenome can cope with genomes modified by Venter?” Yes, it is. I am amazed each time I think of it. As it is amazing that the epigenome in the oocyte can cope with a differentiated nucleus in cloning experiments based on somatic cell nuclear transfer. The epigenome seems to be a very powerful entity, indeed.

“Do you agree that DNA is not a conceivable candidate for controlling and/or coordinating the epigenome?” The only thing that I can say is that something controls and guides the G+E entity (the whole biological being), and that at present we really don’t know what it is or where the information that must be necessary for the process is written. We know too little. I usually sum it up with the old question: where and how are the procedures written?

6. I really think that the “master controller” of differentiation still eludes us. We know rather well a lot of the epigenetic landscapes which correspond to differentiation procedures, and the role of many agents in those procedures. But still, it is the “control” which eludes our understanding. IOWs, what decides the specific landscape which will be implemented at a specific moment, and what controls the correct implementation, through the correct resources? And how are the different scenarios implemented? The role of DNA is certainly important, but we still have to understand a lot about how DNA performs such a role. At present, we must assume that the sum of genome and epigenome at each moment has the information to achieve the correct destiny of the cell, and the tools to read and implement that information into specific epigenetic pathways.

7. I agree with you, and I am perfectly aware of how much has been discovered. Indeed, if you read my post #22, I state: “We know rather well a lot of epigenetic landscapes which correspond to differentiation procedures, and the role of many agents in those procedures. But still, it’s the “control” which eludes our understanding. IOWs, what decides the specific landscape which will be implemented in a specific moment, and what controls the correct implementation, through the correct resources?” The problem, as I see it, is that we are acquiring a lot of detail about the pathways which are activated in various forms of differentiation, but we still cannot understand the control of those choices.
In software you can have many different functions, or objects, and then you have higher-level procedures which use them according to some well-designed plan, which requires a lot of information. Both the information in the functions and objects and the information in the higher-level procedures are needed. My point is simply that in biological differentiation we still don’t understand where the information about the higher-level procedures is, and how it works. (A toy illustration of this two-layer picture follows below.)

There are some interesting concepts being proposed. For example, I am very intrigued by suggestions about how decisions about stemness and differentiation are made in cell pools, and how stem cells could work as a partially stochastic system to implement decisions. However, I still find that we understand very little about the informational organization of cell differentiation, although I try daily to read new papers on that topic, hoping to find new hints.

8. Isn’t the signaling the control? No. The control is deciding when and how and how much [and where?] a signal must be implemented. The gene for BMP4 is always there, in the genome. All signals are potentially there: all transcription factors, and everything which can potentially be transcribed. The problem is: each epigenetic landscape is characterized by multiple and complex choices of signals. How can the cell “decide” and know which sequence of signals will be implemented at each time? How is the correct transcription of the BMP4 gene, and its translation, correctly implemented at the right time?

What we know is essentially that some transcription factors or other molecules are necessary for some transition, and that they are expressed at the right moment, at the right place, and in the right quantity when that transition has to happen. But how is that achieved? That is a different question.

The genome is a book which can be read in hundreds of different ways. There is purpose and information in the control of the ways it is read at each moment. There are hundreds or thousands of different signals, and only the right mix of them can work. Moreover, there must be flexibility, error correction, response to environmental stimuli, and so on. [robustness?]

Do you really believe that only because we understand how some signals are involved in some processes, we know how those processes are decided and controlled? Do you really believe that “the signaling is the control”? You, with all your understanding of informational problems? A signal is a control only when correctly used by a controller.
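Here is the toy illustration referred to above: a minimal Python sketch of the two layers, low-level “signals” that are always available versus a higher-level controller that decides when, where, and how much each signal is used. The function and class names (express, Controller), the gene choices other than BMP4, and the quantities are all invented for illustration; nothing here is offered as real biology, only as a picture of the distinction between a signal and a controller.

# Toy sketch: signals vs. controller. All names, genes (other than BMP4,
# mentioned in the comment above), quantities, and "plans" are illustrative
# assumptions only.

def express(gene: str, amount: float, place: str, time: int) -> str:
    """A low-level 'signal': transcribing/translating one gene.
    It is always available; by itself it decides nothing."""
    return f"t={time}: express {amount} of {gene} in {place}"

class Controller:
    """The higher-level procedure: a plan saying which signals to use,
    in what order, quantity, and location. Where such a plan is written
    in the cell (G, E, or some combination) is the open question."""
    def __init__(self, landscape):
        # landscape: ordered list of (time, gene, amount, place) choices
        self.landscape = landscape

    def run(self):
        return [express(gene, amount, place, t)
                for (t, gene, amount, place) in self.landscape]

# The same set of available signals can yield very different outcomes,
# depending entirely on the plan that uses them:
plan_one = Controller([(1, "BMP4", 0.2, "region_A"), (2, "gene_X", 0.8, "region_A")])
plan_two = Controller([(1, "BMP4", 0.9, "region_B"), (2, "gene_Y", 0.7, "region_B")])
print(plan_one.run())
print(plan_two.run())

In the sketch, express() is the signal and Controller is the control: removing the controller leaves every signal still available yet nothing decided, which is the distinction the comment is drawing.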
https://uncommondescent.com/intelligent-design/name-it-claim-it-epigenetics-now-just-another-evolutionary-mechanism/#comment-603942
Dionisio
December 2, 2016 at 5:29 AM PDT
Very interesting discussion. Just what the doctor ordered! :) Keep it up! Thank y'all!
Dionisio
December 2, 2016 at 4:50 AM PDT
