Uncommon Descent Serving The Intelligent Design Community

Once More from the Top on “Mechanism”

Categories: Intelligent Design

We often get some variation of “Until ID proposes a ‘mechanism’ for how the design is accomplished, it cannot be taken seriously as an explanation for origins.”

Here is an example from frequent commenter Bob O’H (who, after years of participation on this site, should know better):

If ID is correct, then the design has to have happened somehow, so a “how” theory has to exist.

OK, Bob, once more from the top:

Suppose someone printed your post on a piece of paper and handed it to an investigator.  We’ll call him Johnny.  The object of the investigation is to determine whether the text on the paper was produced by an intelligent agent or a random letter generator. 

Johnny, using standard design detection techniques, concludes that the text exhibits CSI at greater than 500 bits, and reaches the screamingly obvious conclusion that it was designed and not the product of a random letter generator.
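
For readers who want to see the rough shape of Johnny's arithmetic, here is a minimal sketch of the chance-hypothesis calculation, assuming a uniform random letter generator over a 27-symbol alphabet (letters plus space); the sample text and the alphabet size are illustrative assumptions, not part of the actual analysis:

import math

def chance_hypothesis_bits(text, alphabet_size=27):
    # Improbability, in bits, of this exact string if every character were
    # drawn uniformly at random: -log2(p) = len(text) * log2(alphabet_size)
    return len(text) * math.log2(alphabet_size)

post = "if id is correct then the design has to have happened somehow so a how theory has to exist ok bob once more from the top"
bits = chance_hypothesis_bits(post)
print(f"{len(post)} characters -> {bits:.0f} bits; exceeds the 500-bit bound: {bits > 500}")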

“Ha!” the skeptic says.  “Johnny did not propose a mechanism by which someone designed the text.  Therefore his design inference is invalid.  If his design inference is correct, then the design has to have happened somehow, so a ‘how’ theory has to exist.”

Bob, is the objection to Johnny’s conclusion valid?

Comments
ET to PU:
"That paper doesn’t contain any science to support its claims. the authors say what they wrote is “sketchy and speculative”
To wit:
The Origin of Prebiotic Information System in the Peptide/RNA World: A Simulation Model of the Evolution of Translation and the Genetic Code Sankar Chatterjee1,* and Surya Yadav2 - March 2019 Excerpt: Discussion and Conclusions,,, "The scenarios for the origin of the translation machinery and the genetic code that are outlined here are both sketchy and speculative,,,, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6463137/
On top of that the paper also honestly admitted that, "life is more sophisticated than any man-made computer system,,,"
we suggest that life is more sophisticated than any man-made computer system where the software/hardware dichotomy is blurred and integrated. We find that this computer analogy too simplistic. Both the informational and functional biopolymers in the translational machinery can be viewed as highly mobile molecular nanobots, which are fully equipped with both the information and the material that are needed to accomplish their tasks. These nanobots ‘know’ how to put themselves together by self-assembly or by cooperation with other molecules.
Yet, even though they conceded that their attempts to explain how something "more sophisticated than any man-made computer system" came about were, in their own words, "sketchy and speculative," PU nonetheless, without a hint that he actually understood what they wrote in the paper, claimed that,
"The given paper presents very solid scientific proofs."
To which ET responded
"stop spewing lies"
Ditto! To which I can only add that PU's comment is pure poppycock!
bornagain77
September 3, 2019, 07:37 AM PDT
PavelU- I have read the paper and I know you are lying about what it contains. What I posted in 63 are facts. That paper doesn't contain any science to support its claims. The authors say what they wrote is "sketchy and speculative"- the AUTHORS said that. So please stop spewing lies.
ET
September 3, 2019, 06:21 AM PDT
Bornagain77 @62 & ET @63: It’s all well described in the given paper. Just read it and try to understand it well. It may take some time to get through all the details, which are so thoroughly and extensively described. This paper is such a game changer that no further debate is necessary. Perhaps that’s why Dr Swamidass decided to stop the discussion with Gpuccio at the Peaceful Science website. Game over. Their argument is more reasonably scientific than your philosophical guessing. They won. You all lost. Read the paper suggested by Professor Art Hunt and be sufficiently humble to accept that fact. You may want to call your experts Dr Behe et al. to give you a hand understanding that paper.PavelU
September 3, 2019, 06:15 AM PDT
PavelU- That paper didn't present anything but speculation. There weren't any proofs. No one has ever shown that nature can produce replicating RNAs. No one has ever shown that nature can produce tRNAs. No one has shown that nature can produce proteins. All that paper represents is a narrative- it may sound like science, but it lacks science.
ET
September 3, 2019, 05:49 AM PDT
PU states "The given paper presents very solid scientific proofs." Really??? Perhaps you would like to lay out some of those supposedly "very solid scientific proofs" that you imagine exist in the paper??? Like say for instance, perhaps you could lay out the "very solid scientific proof" for the origin of a single functional protein? (see post 58).
bornagain77
September 3, 2019, 04:32 AM PDT
Upright BiPed @57: You did not present any convincing argument against the paper ET cited @54 as referenced by professor Art Hunt. The given paper presents very solid scientific proofs. Do you have anything to say that won't make me yawn? You said that you're familiar with that paper. Did you read it well? Did you understand it? Do you see anything that may disqualify or at least weaken the authors' conclusions?
PavelU
September 3, 2019, 04:12 AM PDT
In further critique of naturalistic origin of life (OOL) scenarios:
Origin of Life: Intelligence Required (Science Uprising 05) - video - July 2019 https://www.youtube.com/watch?v=Ymjlrw6GmKU
In response to the preceding video, a Darwinist quipped,
"Yes, we know. It’s still a mystery. Mystery does not automatically mean God did it." - Seversky
To which I responded as such,
No it does not 'automatically' imply that God did it. We would have to put some further scientific evidence behind what we already have. Let's see if we can help Sev find that scientific evidence (for God).
In the video, Dr James Tour, (who is very well respected for his breakthroughs in synthetic chemistry, and who is regarded as one of the top ten synthetic chemists in the world), states the insurmountable problem for atheistic materialists as such:
“We have no idea how to put this structure (a simple cell) together.,, So, not only do we not know how to make the basic components, we do not know how to build the structure even if we were given the basic components. So the gedanken (thought) experiment is this. Even if I gave you all the components. Even if I gave you all the amino acids. All the protein structures from those amino acids that you wanted. All the lipids in the purity that you wanted. The DNA. The RNA. Even in the sequence you wanted. I’ve even given you the code. And all the nucleic acids. So now I say, “Can you now assemble a cell, not in a prebiotic cesspool but in your nice laboratory?”. And the answer is a resounding NO! And if anybody claims otherwise they do not know this area (of research).” – James Tour: The Origin of Life Has Not Been Explained – 4:20 minute mark https://youtu.be/r4sP1E1Jd_Y?t=255
What Dr. Tour touched upon in that preceding comment is the fact that having the correct sequential information in DNA is not nearly enough. Besides the sequential information in DNA, there is also a vast amount of 'positional information' in the cell that must be accounted for. The positional information found in a simple one-cell bacterium, when working from the thermodynamic perspective, is on the order of 10^12 bits,,, which is several orders of magnitude more information than the amount of sequential information that is encoded on the DNA of a 'simple' bacterium.
Biophysics – Information theory. Relation between information and entropy: – Setlow-Pollard, Ed. Addison Wesley Excerpt: Linschitz gave the figure 9.3 x 10^12 cal/deg or 9.3 x 10^12 x 4.2 joules/deg for the entropy of a bacterial cell. Using the relation H = S/(k In 2), we find that the information content is 4 x 10^12 bits. Morowitz’ deduction from the work of Bayne-Jones and Rhees gives the lower value of 5.6 x 10^11 bits, which is still in the neighborhood of 10^12 bits. Thus two quite different approaches give rather concordant figures. http://www.astroscu.unam.mx/~angel/tsb/molecular.htm
,,, Which is the equivalent of 100 million pages of Encyclopedia Britannica. ‘In comparison,,, the largest libraries in the world,, have about 10 million volumes or 10^12 bits.”
“a one-celled bacterium, e. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica. Expressed in information in science jargon, this would be the same as 10^12 bits of information. In comparison, the total writings from classical Greek Civilization is only 10^9 bits, and the largest libraries in the world – The British Museum, Oxford Bodleian Library, New York Public Library, Harvard Widenier Library, and the Moscow Lenin Library – have about 10 million volumes or 10^12 bits.” – R. C. Wysong – The Creation-evolution Controversy ‘The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopedia Britannica.” Carl Sagan, “Life” in Encyclopedia Britannica: Macropaedia (1974 ed.), pp. 893-894
And in regards to this vast amount of positional information that must be accounted for in order to account for the Origin of Life, in the following 2010 experimental realization of Maxwell’s demon thought experiment, it was demonstrated that knowledge of a particle’s location and/or position converts information into energy.
Maxwell’s demon demonstration turns information into energy – November 2010 Excerpt: Scientists in Japan are the first to have succeeded in converting information into free energy in an experiment that verifies the “Maxwell demon” thought experiment devised in 1867.,,, In Maxwell’s thought experiment the demon creates a temperature difference simply from information about the gas molecule temperatures and without transferring any energy directly to them.,,, Until now, demonstrating the conversion of information to energy has been elusive, but University of Tokyo physicist Masaki Sano and colleagues have succeeded in demonstrating it in a nano-scale experiment. In a paper published in Nature Physics they describe how they coaxed a Brownian particle to travel upwards on a “spiral-staircase-like” potential energy created by an electric field solely on the basis of information on its location. As the particle traveled up the staircase it gained energy from moving to an area of higher potential, and the team was able to measure precisely how much energy had been converted from information. http://www.physorg.com/news/2010-11-maxwell-demon-energy.html
And as the following 2010 article stated about the preceding experiment, “This is a beautiful experimental demonstration that information has a thermodynamic content,”
Demonic device converts information to energy – 2010 Excerpt: “This is a beautiful experimental demonstration that information has a thermodynamic content,” says Christopher Jarzynski, a statistical chemist at the University of Maryland in College Park. In 1997, Jarzynski formulated an equation to define the amount of energy that could theoretically be converted from a unit of information2; the work by Sano and his team has now confirmed this equation. “This tells us something new about how the laws of thermodynamics work on the microscopic scale,” says Jarzynski. http://www.scientificamerican.com/article.cfm?id=demonic-device-converts-inform
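As a rough sense of scale for the 'energy per bit' relation mentioned above: the figure commonly cited in this literature is kT·ln 2 of energy per bit (Landauer's limit). A small sketch of that arithmetic, with the 300 K temperature chosen purely for illustration:
import math
k_B = 1.380649e-23   # Boltzmann constant in joules per kelvin
T = 300.0            # room temperature in kelvin (illustrative choice)
energy_per_bit = k_B * T * math.log(2)   # Landauer limit, about 2.9e-21 J per bit
print(f"about {energy_per_bit:.2e} joules per bit at {T:.0f} K")
print(f"about {energy_per_bit * 1e12:.2e} joules for the ~10^12 bits cited above")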
And as the following 2017 article states: James Clerk Maxwell (said), “The idea of dissipation of energy depends on the extent of our knowledge.”,,, quantum information theory,,, describes the spread of information through quantum systems.,,, Fifteen years ago, “we thought of entropy as a property of a thermodynamic system,” he said. “Now in (quantum) information theory, we wouldn’t say entropy is a property of a system, but a property of an observer who describes a system.”,,,
The Quantum Thermodynamics Revolution – May 2017 Excerpt: the 19th-century physicist James Clerk Maxwell put it, “The idea of dissipation of energy depends on the extent of our knowledge.” In recent years, a revolutionary understanding of thermodynamics has emerged that explains this subjectivity using quantum information theory — “a toddler among physical theories,” as del Rio and co-authors put it, that describes the spread of information through quantum systems. Just as thermodynamics initially grew out of trying to improve steam engines, today’s thermodynamicists are mulling over the workings of quantum machines. Shrinking technology — a single-ion engine and three-atom fridge were both experimentally realized for the first time within the past year — is forcing them to extend thermodynamics to the quantum realm, where notions like temperature and work lose their usual meanings, and the classical laws don’t necessarily apply. They’ve found new, quantum versions of the laws that scale up to the originals. Rewriting the theory from the bottom up has led experts to recast its basic concepts in terms of its subjective nature, and to unravel the deep and often surprising relationship between energy and information — the abstract 1s and 0s by which physical states are distinguished and knowledge is measured.,,, Renato Renner, a professor at ETH Zurich in Switzerland, described this as a radical shift in perspective. Fifteen years ago, “we thought of entropy as a property of a thermodynamic system,” he said. “Now in (quantum) information theory, we wouldn’t say entropy is a property of a system, but a property of an observer who describes a system.”,,, https://www.quantamagazine.org/quantum-thermodynamics-revolution/
Again to repeat that last sentence, “Now in (quantum) information theory, we wouldn’t say entropy is a property of a system, but a property of an observer who describes a system.”” Think about that statement for a second. That statement should send a chill down the spine of every ID proponent. These experiments completely blow the reductive materialistic presuppositions of Darwinian evolution, (presuppositions about information being merely ’emergent’ from some material basis), entirely out of the water and also directly show that information is a property of an 'observer' who describes the system and is not a property of the (material) system itself as Darwinists presuppose. On top of that, 'classical' sequential information is found to be a subset of quantum positional information by the following method: Specifically, in the following 2011 paper, researchers ,,, show that when the bits (in a computer) to be deleted are quantum-mechanically entangled with the state of an observer, then the observer could even withdraw heat from the system while deleting the bits. Entanglement links the observer's state to that of the computer in such a way that they know more about the memory than is possible in classical physics.,,, In measuring entropy, one should bear in mind that (in quantum information theory) an object does not have a certain amount of entropy per se, instead an object's entropy is always dependent on the observer.
Quantum knowledge cools computers: New understanding of entropy - June 1, 2011 Excerpt: Recent research by a team of physicists,,, describe,,, how the deletion of data, under certain conditions, can create a cooling effect instead of generating heat. The cooling effect appears when the strange quantum phenomenon of entanglement is invoked.,,, The new study revisits Landauer's principle for cases when the values of the bits to be deleted may be known. When the memory content is known, it should be possible to delete the bits in such a manner that it is theoretically possible to re-create them. It has previously been shown that such reversible deletion would generate no heat. In the new paper, the researchers go a step further. They show that when the bits to be deleted are quantum-mechanically entangled with the state of an observer, then the observer could even withdraw heat from the system while deleting the bits. Entanglement links the observer's state to that of the computer in such a way that they know more about the memory than is possible in classical physics.,,, In measuring entropy, one should bear in mind that an object does not have a certain amount of entropy per se, instead an object's entropy is always dependent on the observer. Applied to the example of deleting data, this means that if two individuals delete data in a memory and one has more knowledge of this data, she perceives the memory to have lower entropy and can then delete the memory using less energy.,,, No heat, even a cooling effect; In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that "more than complete knowledge" from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy. Renner emphasizes, however, "This doesn't mean that we can develop a perpetual motion machine." The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what's known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says "We're working on the edge of the second law. If you go any further, you will break it." http://www.sciencedaily.com/releases/2011/06/110601134300.htm
To say that "entropy is always dependent on the observer" is antithetical to the materialistic presuppositions of Darwinian evolution is to make a severe understatement. An ocean of ink has been spilled by Darwinists arguing against Intelligence. Much less will Darwinists concede that an Intelligent "Observer" is needed for the Origin of Life (nor the subsequent diversification of life). In fact Intelligent "Observers" don't even come into play in the Darwinian scenario until long after the origin of life. And even then Darwinists have argued, via population genetics, that our observations are illusory and therefore unreliable. (Donald Hoffman) Of course, Darwinists could, like Dawkins and Crick did, appeal to Intelligent Extra-Terrestrials in order to try to 'explain away' the Origin of Life and avoid the obvious Theistic implications that follow from these recent developments in quantum information theory, but that evidence-free act of desperation on their part, number 1, concedes the necessity of Intelligence in explaining the Origin of Life, and number 2, only pushes the problem back into imaginative and basically untestable speculations. So to further establish that the Designer of Life must be God, it is necessary to point out that “quantum information” is, number one, found to be ubiquitous within life:
Darwinian Materialism vs. Quantum Biology – Part II – video https://www.youtube.com/watch?v=oSig2CsjKbg
And number two, quantum information in particular requires a non-local, beyond space and time, cause in order to explain its existence: As the following article stated, “Our result gives weight to the idea that quantum correlations somehow arise from outside spacetime, in the sense that no story in space and time can describe them,”
Looking beyond space and time to cope with quantum theory – October 28, 2012 Excerpt: “Our result gives weight to the idea that quantum correlations somehow arise from outside spacetime, in the sense that no story in space and time can describe them,” says Nicolas Gisin, Professor at the University of Geneva, Switzerland, and member of the team. https://www.sciencedaily.com/releases/2012/10/121028142217.htm
Darwinian materialists simply have no beyond space and time cause to appeal to in order to explain this quantum information, whereas Christian Theists do:
Colossians 1:17 He is before all things, and in him all things hold together.
Bottom line, these developments in quantum information theory go to the very heart of the ID vs. Evolution debate and directly falsify, number one, Darwinian claims that immaterial information is merely 'emergent' from some material basis. And number two, these experimental realizations of the Maxwell's demon thought experiment go even further and also directly validate a primary claim from ID proponents. Specifically, the primary claim that an Intelligent Designer who imparts information directly into a biological system is required in order to circumvent the second law and to therefore give an adequate explanation of life. And number three, due to quantum non-locality, a beyond space and time cause must be appealed to in order to explain the quantum information that is ubiquitous within life. In short, in any coherent explanation for life, and as far as the empirical science of Quantum Information theory itself is concerned, God, Who is, by definition, beyond space and time, must ultimately be appealed to in order to give an adequate causal explanation of life. Of course, since this is empirical science instead of unrestrained imagination, don't expect Darwinists to be forthcoming about these developments in science any time soon.
bornagain77
September 3, 2019, 04:09 AM PDT
GP @49:
There is also a rationale in considering design as explanation for high FI. We can observe in human design that what seems to allow us to overcome the huge probabilistic barriers to high levels of FI is the simple fact that we are conscious, and as conscious beings we have the following two categories of subjective experiences:
1) The understanding of meanings
2) The feeling of purposes
Some reflection will easily show that those two experiences cannot be described in purely objective terms: they are rooted in consciousness, in subjective representation. Some reflection will also easily show that it's exactly the possibility of understanding meanings and having purposes that allows us to build machines, language and software. Against the probabilistic barriers that preclude the generation of that kind of results by probability alone. So, design is not only the only origin of high FI in the known world, it is also the only reasonable cause of that.
As clearly reasonable and convincing as this may sound, it's not understood by many highly educated folks, like JS and AH at PS. Why? What keeps those intelligent folks from understanding such a rational explanation? Any clues?
PeterA
September 3, 2019, 03:26 AM PDT
In Art Hunt's cited paper we find this claim,
The Origin of Prebiotic Information System in the Peptide/RNA World: A Simulation Model of the Evolution of Translation and the Genetic Code Excerpt: The origin of the genetic code is enigmatic; herein, we propose an evolutionary explanation: the demand for a wide range of protein enzymes over peptides in the prebiotic reactions was the main selective pressure for the origin of information-directed protein synthesis. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6463137/
And yet,
per Dr. Cornelius Hunter: The rugged and flat nature of the protein evolution fitness landscape comes from both theoretical and experimental considerations, and from native sequences as well as random sequences. Studies attempting to blindly evolve protein sequences from random sequences find that an astronomical number of starting points are needed to get close enough in order for selection to do the job: http://www.sciencedirect.com/science/article/pii/0022519377900443 http://www.ncbi.nlm.nih.gov/pubmed/2199970 http://www.ncbi.nlm.nih.gov/pubmed/15321723 http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0000096 Yet even absurdly optimistic studies show that evolution has nowhere near such astronomical resources: http://rsif.royalsocietypublishing.org/content/5/25/953.long And while a given protein may come in very different sequences, only a few percent of changes to that sequence can be sustained. See, for example: http://www.ncbi.nlm.nih.gov/pubmed/19765975 http://www.nature.com/nature/journal/v465/n7300/full/nature09105.html#B4 Protein evolution Excerpt: evolution predicts that proteins evolved when life first appeared, or not long after. But despite enormous research efforts the science clearly shows that such protein evolution is astronomically unlikely. One reason the evolution of proteins is so difficult is that most proteins are extremely specific designs in an otherwise rugged fitness landscape. This means it is difficult for natural selection to guide mutations toward the needed proteins. In fact, four different studies, done by different groups and using different methods, all report that roughly 10^70 evolutionary experiments would be needed to get close enough to a workable protein before natural selection could take over to refine the protein design. For instance, one study concluded that 10^63 attempts would be required for a relatively short protein. (Reidhaar-Olson) And a similar result (10^65 attempts required) was obtained by comparing protein sequences. (Yockey) Another study found that from 10^64 to 10^77 attempts are required (Axe) and another study concluded that 10^70 attempts would be required. (Hayashi) In that case the protein was only a part of a larger protein which otherwise was intact, thus making for an easier search. Furthermore these estimates are optimistic because the experiments searched only for single-function proteins whereas real proteins perform many functions. This conservative estimate of 10^70 attempts required to evolve a simple protein is astronomically larger than the number of attempts that are feasible. And explanations of how evolution could achieve a large number of searches, or somehow obviate this requirement, require the preexistence of proteins and so are circular. For example, one paper estimated that evolution could have made 10^43 such attempts. But the study assumed the entire history of the Earth is available, rather than the limited time window that evolution actually would have had. - Dr. Cornelius Hunter
In short, "Tawfik soberly recognizes the problem. The appearance of early protein families, he has remarked, is “something like close to a miracle.”45,,, “In fact, to our knowledge,” Tawfik and Tóth-Petróczy write, “no macromutations ... that gave birth to novel proteins have yet been identified.”69",, "The emerging picture, once luminous, has settled to gray. It is not clear how natural selection can operate in the origin of folds or active site architecture (of proteins). It is equally unclear how either micromutations or macromutations could repeatedly and reliably lead to large evolutionary transitions. What remains is a deep, tantalizing, perhaps immovable mystery."
Dan S. Tawfik Group - The New View of Proteins - Tyler Hampton - 2016 Excerpt: one of the most favorable and liberal estimates is by Jack Szostak: 1 in 10^11. 42 He ascertained this figure by looking to see how random sequences—about eighty amino acids in length, long enough to fold—could cling to the biologically crucial molecule adenosine triphosphate, or ATP. At first glance, this is an improvement over Salisbury’s calculations by 489 powers of ten. But while an issue has been addressed, the problem has only been deferred. ,,, ,,, nucleotide synthesis, requires several steps. If five enzyme functions were needed (ten are needed in modern adenine synthesis), 43 then the probability would be 1 in (10^11)5, or 1 in 10^55. If all the operations needed for a small autonomous biology were ten functions—this is before evolution can even start to help—the probability is 1 in (10^11)10, or 1 in 10^110. This is more than the number of seconds since the Big Bang, more protons than there are in the universe. In considering a similar figure derived in a different context, Tawfik concedes that if true, this would make “the emergence of sequences with function a highly improbable event, despite considerable redundancy (many sequences giving the same structure and function).”44 In other words, these odds are impossible.,,, Tawfik soberly recognizes the problem. The appearance of early protein families, he has remarked, is “something like close to a miracle.”45,,, “In fact, to our knowledge,” Tawfik and Tóth-Petróczy write, “no macromutations ... that gave birth to novel proteins have yet been identified.”69 The emerging picture, once luminous, has settled to gray. It is not clear how natural selection can operate in the origin of folds or active site architecture (of proteins). It is equally unclear how either micromutations or macromutations could repeatedly and reliably lead to large evolutionary transitions. What remains is a deep, tantalizing, perhaps immovable mystery. http://inference-review.com/article/the-new-view-of-proteins
bornagain77
September 3, 2019, 02:33 AM PDT
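As an aside, the compounding arithmetic in the Hampton excerpt above works out as follows; these are the excerpt's own illustrative figures, not new estimates:
p_one_function = 1e-11   # the Szostak-style figure quoted in the excerpt
print(f"five functions: 1 in {1 / p_one_function**5:.0e}")    # 1 in 10^55
print(f"ten functions:  1 in {1 / p_one_function**10:.0e}")   # 1 in 10^110
print(f"seconds since the Big Bang, for comparison: about {4.35e17:.0e}")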
. #54 I am familiar with the objection (and the paper). I understand very well that Art Hunt, and others like him, are forced by their prior convictions to believe there was once an arrangement of matter on earth that not only began to replicate itself based solely on its dynamic properties (such as in the promise of an unknown self-replicating RNA), but it then somehow kept replicating dynamically as it also came to describe itself symbolically in a semantically-closed self-referential organization. If your paradigm depends upon a continuum of function from the dynamic domain to the symbolic domain, then at some point in the continuum the system has to function in both domains. You can try to move the pieces around to give the appearance of a solution, but you can't avoid it in earnest.

In any case, the physical, chemical, and organizational problems with this position are legion. Among the scores of problems is the fact that (good grief, even granting a dynamic self-replicator) there still remains a sheer vertical face in the necessary organization of semantic closure (which not only requires the presence, but also the simultaneous coordination, of multiple unrelated objects in the system). There are more insurmountable problems in the construction of a set of constraints from memory, as well as numerous other points along the way. I believe this is why materialists typically avoid the fundamental details already recorded in the literature, as well as the history of those discoveries. Thus, devoted materialists like Art Hunt (and his anti-design religious supporters like Joshua Swamidass) are stuck with any form of proposition (which is all the cited paper is) that might be used socially (argumentatively) to save their prior commitments from the empirical evidence as it is fully documented to be. There is a derogatory term used in science circles to describe the promotion of unsupported conclusions in lieu of actual documented physical evidence. That term temporarily escapes my mind, but others might recognize the situation.

There is also the very real issue of that which has been denied. A multi-referent symbol system was famously predicted to be the core physical and organizational requirement of an open-ended self-replicator, like the living cell. That prediction was later confirmed by experiment. Since its confirmation, the physical requirements of that particular system have been exceedingly well-documented in the literature. It has also been well-documented that the only other such system known to science happens to be found only in human language and mathematics -- two unambiguous correlates of intelligence. The recorded history of these discoveries is already on the books, and is not going to change. In fact, there is a completely coherent chain of scientific understanding from persons like Peirce to Turing to von Neumann to Crick to Pattee that fundamentally supports this conclusion. This chain of understanding is not only clearly recognizable, but has been fondly acknowledged by distinguished members of the biological community.

The simple fact of the matter is that design is positively on the table at the origin of the symbol system and constraints that enabled the specification of the first living cell, as well as the subsequent diversity of all life on this planet. Otherwise well-meaning researchers like Art Hunt are perfectly free to hold their (unsupported) paradigms if they wish; it does not change the recorded physical evidence in any way whatsoever.
Likewise, religious propaganda sites like BioLogos and Peaceful Science are free to lead and mislead whoever wishes to be led. All anyone can do is point out the recorded facts for those who are genuinely seeking to know the record.
Upright BiPed
September 2, 2019, 10:20 PM PDT
GP @49:
It is really irrational to believe that, among the necessity laws of biochemistry that govern mutations at the level of nucleotides in the DNA, there may be any law that favors the sequences that code for functional proteins when translated according to the genetic code.
That's a fundamental concept that must be clear to all.
PeterA
September 2, 2019, 07:45 PM PDT
EricMH @ 48,
> Further, is intelligence itself a stochastic process, or something else? If something else, what is a non-stochastic process? I've studied probability, comp. sci., information theory and the like quite a bit, and I've not seen a non-stochastic process defined.
Intelligence, like consciousness, might be something beyond the stochastic/deterministic dichotomy. I don't think we can eliminate all the "chance" possibilities, but until we discover new types, our best inferences will have to operate by eliminating the chance possibilities we do know about.
EDTA
September 2, 2019, 07:11 PM PDT
For Upright Biped, et al- from Art Hunt: The Origin of Prebiotic Information System in the Peptide/RNA World: A Simulation Model of the Evolution of Translation and the Genetic Code. They have an admittedly "sketchy and speculative" narrative, but with it the scientific testing can begin. I remember when the paper Self-Sustained Replication of an RNA Enzyme was published. Neither lead author thought that nature could produce one of their RNAs, let alone the required two. And those two could only bond two other engineered RNAs. The point is the two working RNAs were made up of only 35 nucleotides. Even if nature could get to those, it cannot get beyond that. So we will see...
ET
September 2, 2019, 01:56 PM PDT
. If variation and selection are (among) the mechanisms of evolution, then semiosis (the specification of something among alternatives) is the mechanism of design. Indeed, evolution by selection requires symbolic memory tokens, a coordinated set of interpretive constraints, and a multi-referent code - just as it was predicted. After all, it is the specification of something that is selected. Materialists are free to rock back and forth in their chairs and kick their feet all they wish; it won't change the documented physical reality (or recorded intellectual history of the issue). Demanding to know whether the designer was right or left-handed is merely a ploy to dismiss the positive inference, and can be seen for what it is.Upright BiPed
September 2, 2019, 12:32 PM PDT
EricMH:
If we can never be sure we account for all chance hypotheses, then how can we be sure we do not err when making the design inference?
That is the nature of science. It is tentative.
And even if absolute certainty is not our goal, but only probability, how can we be confident in the probability we derive?
We do the best we can with the knowledge we have. The science of today does not, will not and cannot wait for what the science of tomorrow may or may not discover. We should be able to test the claim with respect to nature's ability to invent/produce biologically relevant replicators. And when that doesn't work, scientists have designed them. And guess what they discovered? Spiegelman's Monster. Your starting population isn't going to get any more complex with respect to functions and length/number of nucleotides. The fastest replicators always win out. And the fastest replicators are always the simplest. They do that one thing, they do it efficiently and they do it faster. That is how it works with nature. The line of least resistance. That is because it doesn't want nor try to do otherwise. Nature allows things to just happen. It definitely isn't going to build a coded system from the ground up, having the code emerge from the system and its components.

So if people aren't convinced by the semiotic argument for Intelligent Design with respect to biology, it isn't for scientific and evidentiary reasons. We need them to step forward with the methodology they use so we can compare. How did they determine NS, drift, CNE or some other materialistic process did it?

Another example is with any bacterial flagellum. It isn't just about getting the right proteins. You need the proper number of subunits. Missing ONE protein means you are missing more than one part. You are missing a chunk of machinery. People want to use the type three secretory system as some sort of evolutionary link? Where did they get that structure from? Never mind the fact that it would take an engineering wonder to pull off the structural and functional changes required. If someone can demonstrate it somehow all just happened to come together, a Nobel Prize definitely awaits them. They will have more fame than any other human ever. And that is why people are trying to do just that. Their successes only expose the huge problems they face.
ET
September 2, 2019, 12:29 PM PDT
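To illustrate ET's point above about Spiegelman's Monster (faster, shorter replicators take over the population), here is a toy competition sketch; the genome lengths, rates, and generation count are made-up illustrative numbers, not data from the actual experiment:
lengths = {"long": 4500, "medium": 1500, "short": 220}   # nucleotides (illustrative)
pop = {k: 1.0 for k in lengths}                          # equal starting abundances
rate = {k: 1.0 / length for k, length in lengths.items()}  # replication rate ~ 1/length
for _ in range(60):
    pop = {k: n * (1 + 50 * rate[k]) for k, n in pop.items()}  # shorter copies faster
    total = sum(pop.values())
    pop = {k: n / total for k, n in pop.items()}               # finite resources: renormalize
print({k: round(v, 4) for k, v in pop.items()})  # the shortest replicator dominates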
Hazel, I'm glad you enjoyed the post. In principle, the designer does not have to be omniscient and omnipotent in order to play with genetics and invent new life forms. After all, our bio-scientists are almost able to do that already, and they are far from omni-anything. Of course God (as the possible designer) could have done it any way he chose, and our study of potential ID mechanisms and procedures are perhaps one way of "looking up God's pant leg" as Einstein famously put it. As an engineer myself, however, I like to think of God as the divine Engineer, tinkering around in his creation to see what he can come up with. Most engineers like getting some "hands on" time in the lab or shop from time to time, or to play around with prototypes and design changes themselves. And although a divine lab and angelic assistants may not be needed by God, who are we to say how he should or should not do whatever he wants to do? As for creative purposes, perhaps one of God's purposes is simply to create many and varied lifeforms, within the constraints of his own creation, just for the fun of it. Engineers, and presumably God, can play with stuff just for fun. E.g. an engineer can constrain himself to using LEGO pieces to make some really cool things, even though he could make similar things by other means. Just speculating here... As mentioned in the post, "common descent" is indeed possible, but not necessary in this scenario. It does put a new spin on "descent with modification", however. A Darwinist might look at the description you quoted in 28 and say that it looks just like Darwinism in action: many small steps, each with small changes. However, each of the small steps would still require major genomic changes, such as new genes, new expression controls, and/or carefully modified development plans - aspects beyond the capability of unguided Darwinian processes. Hence ID may look like what Darwinists want natural selection to look like.Fasteddious
September 2, 2019, 11:32 AM PDT
We don't have to eliminate them all. Just the ones we know of. Science is not about proving something. It is about coming to a reasoned inference based on our current knowledge. And a double-headed coin would be a sign of intelligent design. We don't just eliminate nature, operating freely. Dr. Behe gave us the positive criteria:
"Our ability to be confident of the design of the cilium or intracellular transport rests on the same principles to be confident of the design of anything: the ordering of separate components to achieve an identifiable function that depends sharply on the components.”
THAT is how we justify the design inference.
ET
September 2, 2019, 11:14 AM PDT
EricMH at #48: Good questions.

Regarding the coin example, I would say that the possible explanation "coin with two heads", or even simply a severely unfair coin, is simply a necessity component which changes the probability distribution of the system, favoring one outcome (heads) in the case of an unfair coin, or even transforming the system into a necessity system without any random component (a coin with two heads). However, that would already be included in the classical idea that we have to check for known necessity explanations (see Dembski's explanatory filter) before inferring design.

It is interesting that the problem of excluding necessity influences is important for specifications linked to order (the series of all heads), but it loses much of its relevance (maybe all of it) when we are discussing functional information. It is really irrational to believe that, among the necessity laws of biochemistry that govern mutations at the level of nucleotides in the DNA, there may be any law that favors the sequences that code for functional proteins when translated according to the genetic code. Such a necessity theory would be only an accumulation of impossibilities. So, functional information makes the design inference so much safer. The only necessity component which has been invoked to "explain" FI in biological objects is NS, which is not a law, but rather a complex process. But, for various reasons that I have discussed elsewhere, it completely fails.

There is also a rationale in considering design as explanation for high FI. We can observe in human design that what seems to allow us to overcome the huge probabilistic barriers to high levels of FI is the simple fact that we are conscious, and as conscious beings we have the following two categories of subjective experiences:
1) The understanding of meanings
2) The feeling of purposes
Some reflection will easily show that those two experiences cannot be described in purely objective terms: they are rooted in consciousness, in subjective representation. Some reflection will also easily show that it's exactly the possibility of understanding meanings and having purposes that allows us to build machines, language and software. Against the probabilistic barriers that preclude the generation of that kind of results by probability alone. So, design is not only the only origin of high FI in the known world, it is also the only reasonable cause of that.

That said, I essentially agree that ID theory should try to face the aspects of the "how", of the mechanism. I have many times expressed some general ideas about that. However, it remains perfectly true that the design inference itself does not need that. But it is a duty of any scientific approach to deal with all aspects of reality that can be in some measure investigated according to available facts.
gpuccio
September 2, 2019, 10:48 AM PDT
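For readers unfamiliar with the term, gpuccio's FI is the Hazen/Szostak-style functional information measure: minus log2 of the fraction of sequences that meet a functional threshold. A minimal sketch with made-up fractions (the 1-in-10^40 figure below is purely illustrative, not a measured value):
import math

def functional_information(functional_fraction):
    # Hazen/Szostak-style FI: -log2 of the fraction of all sequences
    # that perform the function at or above the chosen threshold.
    return -math.log2(functional_fraction)

print(f"{functional_information(1e-40):.1f} bits")    # ~132.9 bits for a 1-in-10^40 fraction
print(f"{functional_information(2**-500):.1f} bits")  # 500 bits, the commonly cited bound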
@ET I think Bob O'H makes an important point. The problem is what do we mean by 'chance and necessity'? Is that all possible stochastic processes, or a particular subset? If the former, how can we eliminate all of them? If the latter, how do we know another subset is not responsible? Further, is intelligence itself a stochastic process, or something else? If something else, what is a non-stochastic process? I've studied probability, comp. sci., information theory and the like quite a bit, and I've not seen a non-stochastic process defined. I think the ID movement does need to do a better job defining exactly what is meant by 'intelligence'. Because merely calculating the CSI does show that some other process might be a better explanation of the event under question, but I'm not sure it is clearly a design inference. If we take the standard example and have a run of 100 heads in a series of coinflips, we can easily eliminate the uniform random hypothesis. But, what allows us to make the design inference? And, let's say we do make the design inference, but it turns out the run of heads was due to the coin having heads on both sides. The design inference is supposed to guarantee true positives, so why did it fail in this instance? Is it only because we didn't adequately account for all the possible chance hypotheses, i.e. that the coin had heads on both sides? But if that is the problem, then in the scenario of a natural event that exhibits positive CSI, such as a bacterial flagellum, why can we be confident that we've not made a false positive error when inferring design, if we are not sure we have accounted for all possible chance hypotheses? So, in summary, after my years of personal research, I am confident that the theory of ID and CSI calculations is sound, but the rub is in actually filling in the details of application. If we can never be sure we account for all chance hypotheses, then how can we be sure we do not err when making the design inference? And even if absolute certainty is not our goal, but only probability, how can we be confident in the probability we derive? Thus, while the critics are wrong in claiming ID research cannot proceed without a positive account of intelligent design, such an account seems very important to making a coherent design inference, and being able to justify confidence in doing so. Consequently, on the ID side, we cannot wave our hands and claim design has been inferred in the biological sciences just because we've eliminated a particular chance hypothesis (i.e. Darwinism). We need to better articulate and justify the inference to design. I think that "it looks like a humanly intelligently designed artifact, therefore it is intelligently designed" is actually a pretty decent step, but it needs to be spelled out a bit more why that step makes sense, as JohnnyB has done in quite a few publications at this point. The Mind Matters blog is also a good move in this direction, but it too needs more of a positive account of what intelligence is, vs the (useful) negative analysis of why artificial intelligence is not like human intelligence.EricMH
September 2, 2019, 10:01 AM PDT
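EricMH's 100-heads example above can be made concrete: the bit count against a hypothesis is just -log2 of the probability that hypothesis assigns to the outcome, so the number depends entirely on which chance hypotheses are in the comparison set. A sketch, using only the two hypotheses from his example:
import math

flips = "H" * 100

def bits_against(p_heads):
    # -log2 of the probability of this run of heads under a given per-flip bias
    return -len(flips) * math.log2(p_heads)

print(int(bits_against(0.5)), "bits against the fair-coin hypothesis")
print(int(bits_against(1.0)), "bits against the two-headed-coin hypothesis")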
Brother Brian:
But if it is encrypted we don’t know whether it contains any meaningful information unless we have a good knowledge of encryption techniques.
But we would still know an intelligent agency did it, i.e., it was designed.
We know that it is composed of 20+ proteins, almost all of which are found elsewhere, serving other functions.
And we know that bacteria do not go on shopping sprees. We also know that the protein subunits are expressed in different quantities.
Looking at it from evolution perspective we don’t know the specific evolutionary steps that led to it, but we do have examples that serve another function but only differ by a couple proteins.
Looking at it from an Intelligent Design perspective we don’t know the specific design steps that led to it, but we do have examples that serve another function but only differ by a couple proteins.
We have mechanism for increasing genetic and phenotypic variation in a population.
As does Intelligent Design.
We have mechanisms for differential reproduction that result in some combinations of genes/proteins becoming ubiquitous in the population.
As does Intelligent Design.
We have evidence of proteins that did not exist before arising in a population.
And Intelligent Design offers the only viable explanation.
You could argue that this was due to design but until you come up with a mechanism for this occurring, we fall back on mechanisms that we know exist.
Pure equivocation. For all we know those mechanisms are design mechanisms. The problem is people like Brian have absolutely no idea what ID is and no idea what mainstream evolutionary thought entails.
ET
September 2, 2019, 09:07 AM PDT
EDTA
But even a text message that we could never decrypt would still show evidence of design because the letters on the paper (or bits travelling over a wire) did not happen by chance.
I agree. But we know this because we recognize the way we write our languages. But if it is encrypted we don't know whether it contains any meaningful information unless we have a good knowledge of encryption techniques. In short, to confirm design of the message we need knowledge about the language being used, or encryption techniques if it is encrypted.

Let's use the flagellum as an example. We know that it is composed of 20+ proteins, almost all of which are found elsewhere, serving other functions. Looking at it from an evolution perspective we don't know the specific evolutionary steps that led to it, but we do have examples that serve another function but only differ by a couple proteins. We have mechanisms for increasing genetic and phenotypic variation in a population. We have mechanisms for differential reproduction that result in some combinations of genes/proteins becoming ubiquitous in the population. We have evidence of proteins that did not exist before arising in a population. You could argue that this was due to design but until you come up with a mechanism for this occurring, we fall back on mechanisms that we know exist.
Brother Brian
September 2, 2019, 08:46 AM PDT
hazel- a list off the top of my head and nowhere near finished:
1- Purpose- under ID there is a purpose, besides happiness and leading a good life, to our existence. I would think that would be the most important question to answer. It should get the world on the same page. And if not, at least then we would know who the hopeless resource wasters are.
2- How to properly maintain the design. Using our own design insights to solve biological issues. For example, I would think it would be helpful to know that certain pathogens were the product of a design gone awry. Compare to the "good" microbes and see if we can engineer a solution. Cancer- same idea. See if we can infuse the cancerous cells with synthesized functional information to correct the genetic defects.
3- Our understanding of what makes an organism what it is would no longer be focused on the genomes, genotypes and epigenetic effects. So we should be able to better solve that mystery. That would go a long way toward understanding our existence.
4- The junk DNA argument would be switched to "what is the function of that DNA? Is it similar to RAM and EEPROMs as carriers and holders of (immaterial) information?"
5- And in an ID scenario the odds of other civilizations out there increase to 1. So where are they and have they been here, would be important questions. Can they help? Will they help? Have they helped?
By getting through those we may gain a better understanding of the how and who.
ET
September 2, 2019, 08:25 AM PDT
ET, what are some of the "more important questions" you refer to in 41?
hazel
September 2, 2019, 08:00 AM PDT
Brother Brian:
It’s nice to see that ET agrees that you can’t confirm design unless you have some knowledge and evidence of how the design was implemented
That doesn't follow from what I said. Clearly you are just a desperate troll. I said that you cannot decrypt an encrypted message without the knowledge of how it was encrypted along with the key. That has nothing to do with determining design. If we intercepted a seemingly random sequence of letters we would know that an intelligent agency sent it because nature is incapable.
ET
September 2, 2019, 07:53 AM PDT
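On the encrypted-message point, one classical statistic that separates natural-language text (including text under a simple substitution cipher, which leaves letter frequencies intact) from uniformly random letters is the index of coincidence. A sketch; the sample strings are illustrative only, and output from stronger ciphers would need different tests:
from collections import Counter
import random, string

def index_of_coincidence(text):
    # Probability that two randomly chosen letters from the text match:
    # roughly 0.066 for typical English, roughly 0.038 for uniformly random letters.
    letters = [c for c in text.upper() if c.isalpha()]
    n = len(letters)
    counts = Counter(letters)
    return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))

english_like = "WE OFTEN GET SOME VARIATION OF UNTIL ID PROPOSES A MECHANISM FOR HOW THE DESIGN IS ACCOMPLISHED " * 4
random_text = "".join(random.choice(string.ascii_uppercase) for _ in range(len(english_like)))
print(round(index_of_coincidence(english_like), 3))
print(round(index_of_coincidence(random_text), 3))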
ET
Not true. You not only need that knowledge but also the key. The Brits knew how Enigma worked but it wasn't until they found a key that they could start decrypting messages.
It's nice to see that ET agrees that you can't confirm design unless you have some knowledge and evidence of how the design was implemented.
Brother Brian
September 2, 2019, 07:34 AM PDT
hazel:
My point is that once one has accepted design, the question of what happens in the world when it is implemented becomes the immediate next question.
There are other immediate questions too, hazel. More important questions.
Even if one accepts a possible metaphysical designer, there still is the question of the interface between it and the physical world: if one is empirically observing the physical world, what events actually happen as that implementation is instantiated?
We may never know because clearly it is over and above our capabilities and understanding. So that will take some time to figure out. But in the meantime there are more important questions to answer.
ET
September 2, 2019, 06:00 AM PDT
Bob O'H:
What CSI calculates is the probability that a sequence would fall in the specification if the sequence were drawn randomly.
Not so. It eliminates both chance and necessity.
Thus, he can reject total randomness, but not other mechanisms
Like what, Bob? What other mechanism could possibly put a coherent message on paper or an internet forum?
It’s a large leap from “it’s not mechanism A” to “it is mechanism B” when you can’t even specify how mechanism B works, and you ignore any other possible mechanisms.
What other mechanisms, Bob? Why are you too afraid to say?
ET
September 2, 2019, 05:56 AM PDT
Bob, is the objection to Johnny’s conclusion valid?
No it isn't. What CSI calculates is the probability that a sequence would fall in the specification if the sequence were drawn randomly. Thus, he can reject total randomness, but not other mechanisms (e.g. the CSI calculation assumes independence, so depending on the specification, a Markov process could look like design). It's a large leap from "it's not mechanism A" to "it is mechanism B" when you can't even specify how mechanism B works, and you ignore any other possible mechanisms.
Bob O'H
September 2, 2019, 05:36 AM PDT
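Bob O'H's point about alternative chance hypotheses can be illustrated numerically: the same string can be hugely improbable under a uniform independent-letters model yet quite probable under a first-order Markov model. A toy sketch with made-up transition probabilities (nothing here is a model of DNA or of English):
import math

def bits_uniform(seq, alphabet_size):
    # improbability of this exact sequence if each symbol were drawn uniformly
    return len(seq) * math.log2(alphabet_size)

def bits_markov(seq, start_p, trans_p):
    # improbability of the same sequence under a first-order Markov chain
    logp = math.log2(start_p[seq[0]])
    for a, b in zip(seq, seq[1:]):
        logp += math.log2(trans_p[a][b])
    return -logp

seq = "ABABABABABABABAB"
start_p = {"A": 0.5, "B": 0.5}
trans_p = {"A": {"A": 0.1, "B": 0.9}, "B": {"A": 0.9, "B": 0.1}}  # strongly alternating chain

print(round(bits_uniform(seq, 2), 1), "bits against the uniform hypothesis")
print(round(bits_markov(seq, start_p, trans_p), 1), "bits against the alternating Markov hypothesis")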
BA, as I pointed out, the history of science shows that the objection is not valid. Blackbody/cavity radiation had a spectral pattern that resisted being accounted for on classical analysis. Thus the "ultraviolet catastrophe" and so something seemed wrong even without an alternative. Planck used the lumps of energy model and once radiation came in lumps proportional to frequency, he could fit the observed curve. A few years later Einstein expanded to explaining the photoeffect. Quanta were on the table and were not going away. And yet it took a generation of the top flight practitioners for some sort of theory to be hammered out, the notorious Copenhagen interpretation; many others have been put up since, but all turn on the quantum principle and several other closely linked, empirically grounded ideas such as wave-particle duality, uncertainty etc. Comparing, OoL and Oo body plans face an information (and language!) origin catastrophe. There is a trillion member observation base that establishes the reliable cause of such FSCO/I; intelligently directed configuration. It is backed up by analysis of blind search challenge in configuration spaces of at least 500 - 1,000 bits. Moreover, agent causation by intelligent direction of configuration is a routinely carried out process as OP shows; this gives a how though it is not blindly mechanical, agent action is the basis of logic and mathematics as well as the world of technology around us. Further, as I noted recently, we see molecular nanotech labs synthesising or engineering genomes and we see how molecules are designed and built by chemists, e.g. the molecular car. The problem is not the observations, the history, the reasoning, having a known effective means. No, it is ideological imposition on and straight-jacketing of science and society by a worldview long past bury by date: evolutionary materialistic scientism, aka naturalism. KFkairosfocus
September 2, 2019, 03:00 AM PDT
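For reference, the 500-bit figure kairosfocus mentions tracks Dembski's universal probability bound, usually stated as 10^80 particles times 10^45 state changes per second (a Planck-time rate) times 10^25 seconds, about 10^150 total events, which is roughly 2^498. A quick arithmetic check:
import math

particles = 1e80   # rough count of elementary particles in the observable universe
rate = 1e45        # maximum state transitions per particle per second (Planck-time scale)
seconds = 1e25     # generous upper bound on available time

total_events = particles * rate * seconds
print(f"total events: {total_events:.0e}")                 # 1e+150
print(f"equivalent bits: {math.log2(total_events):.0f}")   # ~498 bits, hence the ~500-bit threshold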
Darwin Devolves? https://gsejournal.biomedcentral.com/articles/10.1186/s12711-019-0458-6
PeterA
September 2, 2019, 01:40 AM PDT