Uncommon Descent Serving The Intelligent Design Community

Who thinks Introduction to Evolutionary Informatics should be on your summer reading list?


Robert Marks sends these endorsements for Introduction to Evolutionary Informatics:

(Note: It is surprisingly easy to read.)

···············
“An honest attempt to discuss what few people seem to realize is an important problem. Thought provoking!”

Gregory Chaitin, Ph.D.
Professor, Federal University of Rio de Janeiro
Eponyms: Kolmogorov-Chaitin-Solomonoff Information Theory
Chaitin’s Number
Chaitin’s algorithm
Author of: The Unknowable
Meta Math!: The Quest for Omega
The Limits of Mathematics
Thinking about Gödel and Turing: Essays on Complexity
Algorithmic Information Theory

···············

“Darwinian pretensions notwithstanding, Marks, Dembski, and Ewert demonstrate rigorously and humorously that no unintelligent process can account for the wonders of life.”

Michael J. Behe, Ph.D.
Professor of Biological Sciences, Lehigh University
Author of: Darwin’s Black Box
The Edge of Evolution

···············

“This is a fine summary of an extremely interesting body of work. It is clear, well-organized, and mathematically sophisticated without being tedious (so many books of this sort have it the other way around). It should be read with profit by biologists, computer scientists, and philosophers.”

David Berlinski, Ph.D.
Author of: The Devil’s Delusion, The Deniable Darwin and Other Essays, The King of Infinite Space: Euclid and His Elements

···············

“For decades and decades, the ubiquitous cultural lie is that Intelligent Design advocates do nothing but rehash old criticisms of evolutionary theory. They never present fresh, positive research that supports ID theory. Now repeating serious criticisms of evolution is very important, especially since the universities, state school boards, and the ACLU have guaranteed that students must never hear of the problems with evolutionary theory. Still, the ID movement must present positive research for its views, and since this has been done for years through a number of publications, it is now a sign of ignorance, intellectual bigotry and bad faith for people to perpetuate this cultural lie. It is itself a lie. But with the publication of the ground-breaking book, Introduction to Evolutionary Informatics, there is now a cutting-edge positive ID research volume that does fresh, heretofore unpublished (and un-thought of!!) ideas that get to the very deepest bottom of recent science that is not only relevant to the ID/Evolution debate, but actually devastates evolutionary theory at the ground floor. In my view, no one reading this book can continue to adopt Theistic Evolution on philosophical and scientific grounds alone. This is must reading for all believers and unbelievers interested in the debate, and Christians who are scientists have, I believe, a moral and spiritual duty to read this book. Though somewhat difficult, Marks, Dembski and Ewert have done a masterful job of making the book accessible to the engaged and thoughtful layperson. I could not endorse this book more highly.”

J.P. Moreland, Ph.D.
Distinguished Professor of Philosophy, Biola University,
Author of: The Soul: How We Know It’s Real and Why It Matters

···············

“With penetrating brilliance, and with a masterful exercise of pedagogy and wit, the authors take on Chaitin’s challenge, that Darwin’s theory should be subjectable to a mathematical assessment and either pass or fail. Surveying over seven decades of development in algorithmics and information theory, they make a compelling case that it fails.”

Bijan Nemati, Ph.D.
Jet Propulsion Laboratory
California Institute of Technology

···············

“Dr. Marks has been at the forefront of research on evolutionary algorithms for three decades. However, in 2007 his university removed the website of his Evolutionary Informatics group because his research was a threat to the status quo in evolutionary biology. Nonetheless, Dr. Marks and his colleagues continued to pursue research into the informational requirements of evolutionary algorithms, the result of which is found in this volume. If you want to know what information theory says about evolution, this is the volume to read.”

Jonathan Bartlett, Director
The Blyth Institute
Author of: Programming from the Ground Up
Building Scalable Web Applications Using the Cloud
Coeditor of: Engineering and the Ultimate: An Interdisciplinary Investigation of Order and Design in Nature and Craft
Naturalism and Its Alternatives in Scientific Methodologies

···············

“Introduction to Evolutionary Informatics is a lucid, entertaining, even witty discussion of important themes in evolutionary computation, relating them to information theory. It’s far more than that, however. It is an assessment of how things might have come to be the way they are, applying an appropriate scientific skepticism to the hypothesis that random processes can explain many observed phenomena. Thus the book is appropriate for the expert and non-expert alike.”

Donald Wunsch, Ph.D.
Mary K. Finley Missouri Distinguished Professor
Director of the Applied Computational Intelligence Lab
Missouri University of Science & Technology
IEEE Fellow, INNS Fellow
Past President of the International Neural Networks Society
Coauthor of Neural Networks and Micromechanics
Unified Computational Intelligence for Complex Systems Clustering

···············

“Evolution requires the origin of new information. In this book, information experts Bob Marks, Bill Dembski, and Winston Ewert provide a comprehensive introduction to the models underlying evolution and the science of design. The authors demonstrate clearly that all evolutionary models rely implicitly on information that comes from intelligent design, and that unguided evolution cannot deliver what its promoters advertise. Though mathematically rigorous, the book is written primarily for non-mathematicians. I recommend it highly.”

Jonathan Wells, Ph.D. Ph.D.
Senior Fellow, Discovery Institute
Author of: Zombie Science,
Icons of Evolution
The Myth of Junk DNA

···············

“When biologists finally come to terms with the fact that Darwinism was a long experiment in collective self-deception, the work described in this book will deserve much of the credit for putting things right.”

Douglas Axe, Ph.D.
Director of Biologic Institute
Author of: Undeniable: How Biology Confirms Our Intuition That Life Is Designed
Coauthor of Science and Human Origins

···············

“Introduction to Evolutionary Informatics helps the non-expert reader grapple with a fundamental problem in science today: We cannot model information in the same way as we model matter and energy because there is no relationship between the metrics. As a result, much effort goes into attempting to explain information (and intelligence) away. The authors show, using clear and simple illustrations, why that approach not only does not work but cannot work. It impedes understanding of our universe. The picture that emerges from their work is of a universe that is at the same time more mysterious than we had been led to expect and more familiar.”

Denyse O’Leary, Science Writer.
Author/Coauthor of:
The Spiritual Brain: A Neuroscientist’s Case for the Existence of the Soul
By Design Or By Chance?: The Growing Controversy On The Origins Of Life In The Universe

···············
“Marks, Dembski, and Ewert have written a book summarizing in a very accessible way all of their research at the Evolutionary Informatics Lab for the last decade. If the blind watchmaker says ‘methinks it is like a weasel’, they say ‘perhaps, but in order to see it you need these active-information glasses.’ When the watchmaker is able to see with the glasses (and he needs them to be certain it is a weasel), he is not blind anymore. He is, like the programmer of an evolutionary algorithm, an intelligent designer with a very clear sight of his target. ‘Oh, yes, it was a weasel!’”

Daniel Andrés Díaz Pachón, Ph.D.
Research Assistant Professor, Biostatistics, University of Miami

···············

“This is an important and much needed step forward in making powerful concepts available at an accessible level.”

Ide Trotter, Ph.D.
Trotter Capital Management Inc.
Founder: Trotter Prize & Endowed Lecture Series on Information, Complexity and Inference (Texas A&M)

···············

“Steampunk fiction anachronistically fuses Victorian steam powered technology into the digital age. Darwinism is ‘steampunk science.’ It is an analog-based Victorian relic trying to make its way in the digital information age. Darwin had no conception of the information problem facing any account of naturalistic evolution. Darwin’s 21st century successors certainly know about the problem, but as Marks, Dembski and Ewert demonstrate in Introduction to Evolutionary Informatics, in 2017 they are no closer to solving the problem than Darwin was in 1859. This lay-accessible introduction to the information issue and how it remains unsolved is absolutely essential to anyone who wants to understand how all life is fundamentally information-based, and how naturalistic evolutionary science has not come remotely close to solving the problem of how meaningful information can arise in the absence of intelligence.”

Barry Arrington, D.Jur.
Colorado House of Representatives (1997-1998)
Editor-in-Chief, UncommonDescent.com

···············

“One of the things Intelligent Design theorists do is take what is obvious to the layman, that unintelligent forces cannot do intelligent things, and state it in more rigorous, scientific terms, so that highly educated people can understand also. This book makes important contributions to that effort, using results and terminology from information theory.”

Granville Sewell, Ph.D.
Professor of Mathematics, University of Texas, El Paso
Author of: Computational Methods of Linear Algebra
In the Beginning: And Other Essays on Intelligent Design
Christianity for Doubters

···············

“A very helpful book on this important issue of information, which evolution cannot explain. Information is the jewel of all science and engineering which is assumed but barely recognised in working systems. In this book Marks, Dembski and Ewert show the major principles in understanding what information is and show that it is always associated with design.”

Andy C. McIntosh DSc, FIMA, C.Math, FEI, C.Eng, FInstP, MIGEM, FRAeS.
Visiting Professor of Thermodynamics, School of Chemical and Process Engineering, University of Leeds, Leeds, UK. Adjunct Professor, Department of Agricultural and Biological Engineering, Mississippi State University, Starkville, Mississippi, USA

···············

People who don’t like the book still won’t.

But see also: Information theory is bad news for Darwin: Evolutionary informatics takes off

Comments
I started a thread at "The Skeptical Zone": Introduction to Evolutionary Informatics DiEb
PS: I guess such a μ_s would have to exist trivially, just normalize the P_s function. daveS
DiEb, Thanks for the reference and the further explanation. If I understand #68 correctly, this shows that for some searches, there is no suitable distribution μ_s that completely characterizes the function P_s which gives the probabilities of locating the targets. Will there always exist a distribution μ_s and a constant of proportionality k such that P_s(T) = k*μ_s(T)? It seems that k = 10 might work in the example from #68, but I don't know if that holds in general. daveS
PS: That lot of background work required to run on a machine includes the biological case that requires metabolic, self replicating machines capable of reading and acting on strings of genetic information. Just ponder protein synthesis if that is too vague or generic for you. And for representations try a von Neumann Kinematic Self Replicator with an integrated fabrication facility. kairosfocus
F/N: On search in an "evolutionary" context, this from Talib S. Hussain may provide convenient context, as there have been attempts to cloud terms like search:
Researchers in many fields are faced with computational problems in which a great number of solutions are possible and finding an optimal or even a sufficiently good one is difficult. A variety of search techniques have been developed for exploring such problem spaces, and a promising approach has been the use of algorithms based upon the principles of natural evolution . . . . In a search algorithm, a number of possible solutions to a problem are available and the task is to find the best solution possible in a fixed amount of time. For a search space with only a small number of possible solutions, all the solutions can be examined in a reasonable amount of time and the optimal one found. This exhaustive search, however, quickly becomes impractical as the search space grows in size. Traditional search algorithms randomly sample (e.g., random walk) or heuristically sample (e.g., gradient descent) the search space one solution at a time in the hopes of finding the optimal solution. The key aspect distinguishing an evolutionary search algorithm from such traditional algorithms is that it is population-based. Through the adaptation of successive generations of a large number of individuals, an evolutionary algorithm performs an efficient directed search.
Of course, the dominant problem is that of the population of possible solutions to functional challenges, the vast majority of which are utterly non-functional; functional cases come in deeply isolated clusters in the space of possibilities. That is, the inherent architecture of the problem is directly analogous to the task of a discoverer with very limited resources seeking out islands in a vast ocean, without a map or guides to the islands. As a result, any approach that turns on blindly picking cases from the space, then hoping to cross-breed good or relatively good performers, then sampling a new generation of cases to repeat, runs into the problem of almost certainly fruitlessly exhausting search resources without ever landing on a shoreline of function. Where, islands of function exist because the right components, correctly oriented and arranged then coupled -- this can be represented in bit strings in some description language -- must be in place with only limited tolerance, for function to emerge. On having a reasonably efficient description language [say, AutoCAD as a yardstick], the set of possibilities for 500 - 1,000 bits is 3.27*10^150 to 1.07*10^301. The first vastly exceeds the search capability of the sol system, the second, that of the observed cosmos. So, we instantly see that evolutionary algorithms critically depend for success on careful and information-rich fine tuning. Fine tuning that puts the initial population in a favourable context where some function at least emerges, so that improved cases can be rewarded by cross-breeding and then contributing to the next generation of cases, leading to a hoped-for pattern of improved performance. Which of course has to be measured or observed in the computer but is analogous to survival and reproductive success of the fittest; never mind issues of circularity that often obtain. This involves what Dembski et al term active information. 
But in fact there is a lot more of such work going on in the background to get the schemes to work on a given machine. In short, the FSCO/I problem is real and is critical, regardless of real or imaginary flaws in any particular analysis. Unless this is acknowledged to be a key challenge, the debates we see pivot on begging the dominant question. KF kairosfocus
DiEb, you have repeatedly tried to establish a perception of irrelevance by dismissive remarks. I have responded on substance showing that just the opposite is the case, kindly note 41 ff above especially and the remarks I made overnight when you went back to the irrelevancy claims yet again. Also, note the key remark cited at 63 above, do you have a reasonable, empirically warranted response that shows otherwise? Never mind whatever flaws you may find or think you find, is this substantial point groundless, on what basis of evidence that we observe -- key word -- blind search to create copious functional information and systems that can profitably employ it? Failing at this point, critiques will boil down to straining out gnats while swallowing camels. In short, the issue is due balance that reckons with the main issue, rather than trying to nip at heels and creating a false impression that this is of no consequence, there is no real problem here. The problem of origin of FSCO/I by blind search is a serious challenge, one that is at the heart of OOL and origin of body plan claims through the evolutionary materialistic school of thought. The evidence in hand is, the only actually empirically warranted cause of such FSCO/I is design, to the point that it is a good sign of design as cause. That is what needs to be addressed squarely. But for years, I find that objectors will do almost anything but this. KF kairosfocus
@KF: "Per fair comment, my remarks have been shown to be relevant to the issues at stake, to the wider corpus of work by Dembski et al, and to the specific content of the book under discussion as described by one of its authors." Fair comment? What are you talking about? I'm really lost... DiEb
@daveS: If you have to guess a number between 1 and 100, and you may guess 10 times, then - in the language of Wolpert and Macready - this is akin to optimizing the characteristic function of the number. The set of characteristic functions of single elements of the set {1,...,100} is closed under permutation, so the NFL theorem can be applied. The result: in this case, no strategy is on average over all targets better than 10 random guesses (without repetition) - in 10% of the searches you are successful. Now, DEM say that a search "induces a probability distribution μ_s on Ω that entirely characterizes the probability of S successfully locating the target T". If I take the average over all targets, I get 0.01 * Σ_{T ∈ Ω} μ_s(T) = 0.01 = 1%. That is surprising! BTW: the definition of a search in their earlier paper "The Search for a Search: Measuring the Information Cost of Higher Level Search" leads to the 10% figure, too - it just has other problems... DiEb
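The 10% figure in DiEb's example is easy to sanity-check with a short simulation (my own illustrative sketch, not code from the book or the comment; the function name and trial count are arbitrary):

```python
import random

def guess_success_rate(n_trials=100_000, space=100, n_guesses=10, seed=0):
    """Estimate the probability that 10 distinct random guesses
    find a hidden number in {1, ..., 100}."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        target = rng.randint(1, space)
        guesses = rng.sample(range(1, space + 1), n_guesses)
        hits += target in guesses
    return hits / n_trials

print(guess_success_rate())  # close to 0.10, matching the 10% in the comment
```

Since the guesses are distinct, the exact success probability is simply 10/100; the simulation should land within a fraction of a percent of that.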
F/N: On search, here are my remarks above in reply to questions, which appear to have been ignored in the repetitive demand for definition of search -- something that seems strange to me given the context of Darwinian evolution and its long history of computer simulation using searches across a space filtered by some sort of quality function, typically a "fitness function":
17 kairosfocus, July 5, 2017 at 9:00 am: DiEb, search is in effect a synonym for sampling, from flat random to extremely biased. The point is, search and space and in many cases initial point — think random walk — have to be well matched for success such as high function based on configuration to emerge. Search is thus inherently a complex topic, and to pretend that oh it can be made simplistic, is to fail to address reality. Further, search on material resources, i.e. search with resource and temporal costs [and monetary ones often] is constrained by that reality, so that scope of space to be searched is a highly relevant issue. I am sure you have seen decision analysis on whether anticipated cost of further investigation is likely to be worthwhile on reward likely to be obtained. This is a part of the bounded rationality problem in decision-making. One of the constraints here is likely scope of space of possibilities and available resources to search, especially if the search is by blind chance and/or mechanical necessity. A space of 4 bits is readily exhaustively searched, one of 500 bits is infeasible on the gamut of the sol system, and 1,000 on the gamut of the observed cosmos. And this is directly relevant to search for viable configs for origin of cell based life in a Darwin warm pond or the like. Boiled down, as a search is a sample, a search for a golden search comes from the power set of the original space, which is exponentially harder. So, direct first level search is the practical upper limit. On this, when we deal with complex configuration based function, blind search is maximally unlikely to access relevant function, but we know — as your comment illustrates — that design routinely and rapidly generates solutions. So, we can readily see how a complexity threshold of 500 – 1,000 bits becomes a good test for design as most credible causal explanation. 
Indeed, of trillions of observed cases of such FSCO/I, never has there been an observation of such by blind chance and mechanical necessity. We routinely produce results by design that exceed the limit. So, no, this is not something to be lightly dismissed. KF 21 kairosfocus, July 5, 2017 at 10:06 am: Kindly explain what is wrong or irrelevant with: “search is in effect a synonym for sampling, from flat random to extremely biased. The point is, search and space and in many cases initial point — think random walk — have to be well matched for success such as high function based on configuration to emerge. Search is thus inherently a complex topic . . . ” KF PS: Search: Se_x = Sa_x (Space_y) | start place_z –> Success filter. Where, a particular search and a particular space must be given, also initial point, which can be forced or itself a prior random or intentional choice. [and more . . . ]
Per fair comment, my remarks have been shown to be relevant to the issues at stake, to the wider corpus of work by Dembski et al, and to the specific content of the book under discussion as described by one of its authors. That SHOULD be enough for a reasonable discussion. KF PS: Perhaps terms like: Configuration space: https://en.wikipedia.org/wiki/Configuration_space State Space: https://en.wikipedia.org/wiki/State_space_(physics) or Phase Space: https://en.wikipedia.org/wiki/Phase_space are unfamiliar, so these should be helpful first level links, noting of course the limitations of that site. For instance, state space is commonly used for control systems to denote an abstract space that defines state. Configuration space is in effect synonymous. Microstate and macrostate are terms in statistical thermodynamics. Gibbs and Boltzmann are key pioneers. kairosfocus
DiEb, with all due respect, I think you need to go look in a mirror, starting with the manner of your reply at 16 above and the pattern you have sustained since. Especially, in the aftermath of my highlighted and annotated citation from Marks at 41 above. KF PS: On the latest tangent on probability distributions, I think you will see a reasonable summary of why I spoke as I did here, http://www.itl.nist.gov/div898/handbook/eda/section3/eda361.htm Particularly note the generic sense involved, antecedent to particular models and functions. kairosfocus
PPS: Cf that to my first comment above, which seemed to set you off on a campaign starting with, oh, give a precising definition of search:
15 kairosfocusJuly 5, 2017 at 6:58 am DiEb, the problem is it is not hard for a search space to become so large that reasonable search becomes impossible. Under those conditions, match of strategy, start-point, and specifics of space become important, especially if a space does not have the sort of convenient pointing slopes that some use to convey a misleading impression of the likelihood of searches. Where, 500 bits specifies 3.27*10^150 possibilities and 1000, 1.07*10^301. The first exhausts sol system resources and the latter, those of the observed cosmos. And of course, with a suitable description language, all searches come down to searches on binary spaces of suitable bit depth. KF
kairosfocus
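The two rounded figures quoted in comment 15 above can be checked directly against the exact powers of two (a throwaway arithmetic check of my own, not from the book):

```python
# Config-space sizes for 500 and 1,000 bits, as quoted in the comment:
# 2^500 ≈ 3.27*10^150 and 2^1000 ≈ 1.07*10^301.
print(format(2 ** 500, ".2e"))   # 3.27e+150
print(format(2 ** 1000, ".2e"))  # 1.07e+301
```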
@KF: sorry, I was not aware that I was the one who played tangential word games... DiEb
PS: BTW, money shot from this new book -- and again a key issue that has been resisted for years by objectors in and around UD:
Design is an inherently iterative process requiring domain intelligence and expertise. Domain knowledge and experience can be applied to the procedure to decrease the time needed for a successful search. Because of the exponential explosion of possibilities (i.e. the curse of dimensionality), the time and resources required by blind search quickly become too large to apply. Undirected Darwinian evolution has neither the time nor computational resources to design anything of even moderate complexity. External knowledge is needed. Neither quantum computing nor Moore's law makes a significant dent in these requirements. [Marks, Dembski, Ewert, Introduction to Evolutionary Informatics, World Scientific, 2017, p.59.]
kairosfocus
DiEb, if you want to play tangential word games Mathematics is exactly NOT a science. In fact I used words in a reasonable manner, in the specific context of the Gibbsean distribution of probabilities across microstates; which extends into information metrics. But then you said already that you were picking up snippets and commenting -- cf 41 and 46 above. And in turn, that was in context of what seems to have gone poof once I cited, highlighted and noted on Marks' remarks on the context of the current book. Which showed that my earlier remarks were in fact relevant. KF kairosfocus
@daveS: "Does the distribution P_s(k) that DEM talk about then give you the probability that search S finds the target, given that the target was k?" Exactly. As DEM write in "A General Theory of Information Cost Incurred by Successful Search":
Applying the discriminator Δ to this random search matrix thus yields an Ω-valued random variable Δ(S), which we denote by X_S . As an Ω-valued random variable, X_S therefore induces a probability distribution μ_s on Ω that entirely characterizes the probability of S successfully locating the target T. In this way, an arbitrary search S can be represented as a single probability distribution or measure μ_s on the original search space Ω. This representation will be essential throughout the sequel.
DiEb
@KF: Mathematics is an exact science where words have meanings. You cannot simulate mathematics with a torrent of unspecified words. DiEb
DiEb, I am not using distribution in the sense of a closed form function we can readily state or statistically readily identify but in the simple descriptive sense that everything from flat random to some things certainly so and others certainly not so comprise distributions of probabilities. Notice, hitherto I have not spoken to probability distribution functions but to the sort of thing we may see with a fair vs a loaded die to use a simple case in point. The onward return to focal issues is as I already stated. KF kairosfocus
DiEb, The distribution δ_7 is not at all what I had in mind for this search. I was thinking more in terms of a distribution which guides the selection of samples. Does the distribution P_s(k) that DEM talk about then give you the probability that search S finds the target, given that the target was k? daveS
Thanks for the reply, DiEb. daveS
@KF: there is a difference between "having probabilities for events" and the concept of a "probability distribution"! If I give you two chances to guess a number randomly chosen from the set {1,2,3}, you could guess each event correctly with probability 2/3 - that does not give you a "probability distribution" on {1,2,3}.... DiEb
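DiEb's {1,2,3} example can be made concrete: each element has a well-defined probability of being guessed, but those probabilities sum to 2, so they do not form a probability distribution on the space (an illustrative sketch of my own; the variable names are mine):

```python
from itertools import combinations

# Two distinct guesses from {1, 2, 3}: for each possible target,
# the probability that it is among the guesses.
space = [1, 2, 3]
pairs = list(combinations(space, 2))  # the 3 equally likely guess-pairs
success = {x: sum(x in p for p in pairs) / len(pairs) for x in space}
print(success)                # each target is covered with probability 2/3
print(sum(success.values()))  # 2.0 -- sums to 2, not 1: no distribution on {1,2,3}
```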
daveS: "What precisely is the distribution in the example I cited? I'm not necessarily doubting what you are saying, but I want to be clear about what it is." The method leads to identifying the number "7" with probability 1. According to DEM, the induced probability distribution is δ_7. What if 8 was the target? One would think that you should find it with probability 1, too. No, not according to DEM: you will find it with δ_7(8) = 0, ergo never. But wouldn't you be led to the number 8 by the answers of the oracle/values of the function? No, as DEM separate the minimum of the function and the actual target: you have no way to identify the target within your search. That's why I bring up the complete search: checking every single number should reveal the target. But not in the universe of DEM: that's what I call paradoxical. With DEM you have to produce your candidate for a target, and they will tell you whether it was the target afterwards. I'd define a search and a search algorithm along the lines of Wolpert and Macready in their paper "No Free Lunch Theorems for Optimization": in most search problems you have a space and a function on this space with a range of (at least partially) ordered values. The target is the element where the function reaches its optimum, so optimization and search are two sides of the same coin. For an unassisted search, the function is just the characteristic function of the target; other problems provide more complex functions: for the WEASEL, you have the Hamming distance. You can think of the Traveling Salesman Problem (TSP) as a search for the shortest way through all cities. In the first two examples, you know the optimum of the function beforehand, so you can identify your target during the search. For the TSP, the optimum is not known upfront. But at least theoretically, you can enumerate all possible ways and identify the optimum, ergo find the target. DiEb
DS, I am not interested in whether a distribution has a given closed form or can be reconstructed from stochastic studies, the relevant point is, once you select or sample from a set, some distribution will be there. Even, the crude, I will never sit in row 13 -- oops, 12A -- on a flight. (I think some people will refuse to fly rather than sit in that row.) Back to focal issues, sampling selects a subset (up to the point it's a census) and the set of all searches of a config space will be the set of its subsets. This gives us a simple way to see how search for a golden search will be exponentially harder than a direct search. Back on more core focus, when we have a case where blind search of a config space is forced to be extremely sparse (by way of resource exhaustion of sol system or observed cosmos) it then shows why deeply isolated islands of function will be effectively unobservable -- by overwhelming statistical weight of non-functional/meaningless states. This then leads to the dominance of fruitless needle in haystack blind search challenge over hoped-for hill climbing by incremental change of members of populations through recombinations, mutations etc. And, FSCO/I will naturally come in that needle in haystack pattern as components have to be pretty much right, have to be properly oriented, and have to be correctly arranged and coupled for coherent function to emerge. KF kairosfocus
KF,
DS, any search or selection process on a set will impose a probability distribution. Certainty is also a probability level. KF
What precisely is the distribution in the example I cited? I'm not necessarily doubting what you are saying, but I want to be clear about what it is. daveS
DS, any search or selection process on a set will impose a probability distribution. Certainty is also a probability level. KF kairosfocus
DiEb & KF,
One of the most problematic sentences is on page 173: “We note, however, the choice of an [search] algorithm along with its parameters and initialization imposes a probability distribution over the search space”.
This is over my head, and I haven't read the book, but I'm trying to understand what probability distribution would arise in the case of a binary search. Here's an example lifted from the wikipedia page on binary search. The goal of the search is to find the "7" in the sorted array. I presume that the "search space" consists of the set of numbers in the original array. Does the algorithm naturally define a probability distribution on that space? daveS
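For concreteness, here is the textbook binary search daveS refers to. For a fixed sorted array and key the run is fully deterministic, which is why the induced "distribution" concentrates all its mass on a single outcome (my sketch of the standard algorithm, not code from the book):

```python
def binary_search(arr, key):
    """Standard binary search on a sorted list; returns the index
    of key, or None if key is absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == key:
            return mid
        if arr[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return None

arr = [1, 3, 4, 6, 7, 8, 10, 13, 14]  # a sorted array like the wikipedia example
print(binary_search(arr, 7))  # 4 -- the same index every run; no randomness involved
```

Since there is no randomness, any probability distribution "imposed" by this algorithm over the search space would have to be a point mass on the element it returns.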
A separation of the search (binary strings) and solution spaces (sentences) which removes the necessity to exclusively adopt a variable-length genetic algorithm (or even any evolutionary algorithm!) as is standard in GE as the search engine. The search operators of the evolutionary algorithm themselves ... - Natural Computing Algorithms
DiEb demands that they define "search"! Or not. Mung
@DiEb: I’m looking forward to your definition of the term “search”! Mung
This leads us to the notion that the process of problem solving can be viewed as a search through a potentially huge set of possibilities to find the desired solution. Consequently, the problems that are to be solved this way can be seen as search problems. - Introduction to Evolutionary Computing
Maybe they need to define 'search.' If not, why not? DiEb, you sent them a demand letter, didn't you? Mung
DiEb, I am looking forward to some pretty graphs from you! Maybe those pretty graphs you post over at TSZ don't depend on search though. Why don't you define search? Mung
DiEb, The above directly shows the relevance of considerations above to the purpose and issues discussed in the book, AS WAS INDICATED BY AN AUTHOR. Second, sufficient description and symbolisation of search has been given long since in this thread. All I will add here is that in a context of self-replicating entities with related metabolic and other functions [the von Neumann kinematic self-replicator with an integral functioning entity is a picture], success filtering would be by reproductive success across generations in relevant environments. And if you wish a relevant information structure, try DNA and changes by mutation etc. The net result will clearly be that even moderate quantities of information will not emerge by such a blind search process. Which, starting from some initial case -- that already is a huge jump start -- will be by a random walk process. KF PS: I particularly point you to 41, which is in fact mostly Marks, not me. If you refuse to read more than a couple of hundred words by an author and then fail to perceive the connexions between this book and the corpus of work over years, thence the relevance of my own remarks, that failure is obviously of your own making. I suggest, start from 41, an approximation to a lost comment. kairosfocus
@KF: I'm looking forward to your definition of the term "search"! DiEb
KF: "DiEb, I have just showed how one of the authors connects to the corpus, and above you spoke to that corpus. My point is, that corpus is not in isolation from wider sampling, thermodynamics and information issues; also, the points I made about searches are directly relevant. KF" Your "points" are not relevant for a discussion of the book. F/N (does this mean further notice?): I hope that you do not expect anyone to read more than a couple of hundred words of your comments! DiEb
PPS: More from the infamous corpus, from fully five years past:
Algorithmic Specified Complexity Winston Ewert, William A. Dembski, and Robert J. Marks II October 16, 2012 Abstract As engineers we would like to think that we produce something different from that of a chaotic system. The Eiffel tower is fundamentally different from the same components lying in a heap on the ground. Mt. Rushmore is fundamentally different from a random mountainside. But we lack a good method for quantifying this idea. This has led some to reject the idea that we can detect engineered or designed systems. Various methods have been proposed, each of which has various faults. Some have trouble distinguishing noise from data, some are subjective, etc. We propose to use conditional Kolmogorov complexity to measure the degree of specification of an object. The Kolmogorov complexity of an object is the length of the shortest computer program required to describe that object. Conditional Kolmogorov complexity is Kolmogorov complexity with access to a context. The program can extract information from the context in a variety of ways, allowing more compression. The more compressible an object is, the more we may deem the object specified. Random noise is incompressible, and so compression indicates that the object is not simply random noise. We hope this model launches further dialog on the use of conditional Kolmogorov complexity in the measurement of specified complexity.
kairosfocus
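The abstract's compressibility claim can be sketched with off-the-shelf compression as a computable stand-in for Kolmogorov complexity (which is uncomputable; zlib only gives an upper bound). The helper `c_len` and the test strings below are invented for illustration and are not the authors' ASC metric:

```python
import os
import zlib

def c_len(data: bytes, context: bytes = b"") -> int:
    """Compressed length of data; a nonempty context is used as a shared
    zlib dictionary, a crude stand-in for conditional complexity K(data | context)."""
    comp = zlib.compressobj(level=9, zdict=context) if context else zlib.compressobj(level=9)
    return len(comp.compress(data) + comp.flush())

sentence = b"the quick brown fox jumps over the lazy dog"
regular = sentence * 25           # highly patterned, ~1 KB
noise = os.urandom(len(regular))  # random bytes: essentially incompressible

# Compressibility separates regularity from noise...
assert c_len(regular) < 100 < c_len(noise)
# ...and a matching context makes the same object cheaper to describe,
# the "conditional" part of conditional Kolmogorov complexity.
assert c_len(sentence, context=regular) < c_len(sentence)
```

The second assertion mirrors the abstract's point about context: given related material, the compressed description of the object shrinks.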
PS: Let's start with p. 9 in the book, where Mt Rushmore is compared to the famous face-like shadow photographed on Mars in 1976. How are the two to be distinguished as to credible causal source? (Despite enthusiastic UFO fans, the Mars face is credibly an accident.) Marks et al answer: SPECIFIED COMPLEXITY. (And BTW, as a Google search of Wiki just exposed, CSI -- contrary to insistent distortions and willful misrepresentations to the point of deceit -- did not originate with Dembski, nor is it "pseudoscientific" -- a fairly meaningless dismissive epithet given the failure of demarcation arguments. On the contrary, Orgel's 1973 discussion, cited ever so many times, is decisive.) Namely, we are not just dealing with a vaguely human face but with four highly specific portraits with a degree of complexity and specificity as to be identifiable as four particular men, famous Presidents of the USA. This then allows addressing functional, meaningful information and various metrics from Dembski et al to Gitt and Durston et al. So, yes, we go beyond Shannon info that cannot tell the difference between a Blu-Ray disk with a movie on it and one full of noise. And again the relevance and connexion of the above to the corpus are underscored. The authors then point to ch 7, for details on Algorithmic Specified Complexity, which allows measuring meaningful info. kairosfocus
F/N: Let me clip again from Marks at ENV, this time with highlights and annotations. I hope this goes through without a hitch: __________________ https://evolutionnews.org/2017/06/top-ten-questions-and-objections-to-introduction-to-evolutionary-informatics/ >>3. You model evolution as a search. Evolution isn’t a search. We echo Billy Joel: “We didn’t start the fire!” Models of Darwinian evolution, Avida and EV included, are searches with a fixed goal. [--> so, search is relevant, and thus the fact that a search produces a sample of the set of possibilities, i.e. a subset, thence that the set of possible searches is tantamount to the set of subsets] For EV, the goal is finding specified nucleotide binding sites. Avida’s goal is to generate an EQU logic function. Other evolution models that we examine in Introduction to Evolutionary Informatics likewise seek a prespecified goal. The evolution software Avida is of particular importance because Robert Pennock, one of the co-authors of the first paper describing Avida,4 gave testimony at the Darwin-affirming Kitzmiller et al. v. Dover Area School District bench trial. Pennock’s testimony contributed to Judge Jones’s ruling that teaching about intelligent design violates the establishment clause of the United States Constitution. Pennock testified, “In the [Avida computer program] system, we’re not simulating evolution. Evolution is actually happening.” If true, Avida and thus evolution are a guided search with a specified target bubbling over with active information supplied by the programmers. The most celebrated attempt of an evolution model without a goal of which we’re aware is TIERRA. In an attempt to recreate something like the Cambrian explosion on a computer, the programmer created what was thought to be an information-rich environment where digital organisms would flourish and evolve. According to TIERRA’s ingenious creator, Thomas Ray, the project failed and was abandoned. 
There has to date been no success in open-ended evolution in the field of artificial life.5 Therefore, there exists no model successfully describing undirected Darwinian evolution. 4. You are not biologists. Why should anyone listen to you about evolution? Leave aside that this question reeks of the genetic fallacy used in debate to steer conversation away from the topic at hand and down a rabbit trail of credential defense. The question is sincere, though, and deserves an answer. Besides, it lets me talk about myself. The truth is that computer scientists and engineers know a lot about evolution and evolution models. As we outline in Introduction to Evolutionary Informatics, proponents of Darwinian evolution became giddy about computers in the 1960s and 70s. Evolution was too slow to demonstrate in a wet lab, but thousands and more generations of evolution can be put in the bank when Darwinian evolution is simulated on a computer. Computer scientists and engineers soon realized that evolutionary search might assist in making computer-aided designs. In Introduction to Evolutionary Informatics, we describe how NASA engineers used guided evolutionary programs to design antennas resembling bent paper clips that today are floating and functioning in outer space. Here’s my personal background. I first became interested in evolutionary computation late last century when I served as editor-in-chief of the IEEE6 Transactions on Neural Networks.7 I invited top researchers in the field, David Fogel and his father Larry Fogel, to be the guest editors of a special issue of my journal dedicated to evolutionary computing.8 The issue was published in January 1994 and led to David founding the IEEE Transactions on Evolutionary Computing9 which today is the top engineering/computer science journal dedicated to the topic. 
My first conference paper using evolutionary computing was published a year later10 and my first journal publication on evolutionary computation was in 1999.11 That was then. More recently my work, funded by the Office of Naval Research, involves simulated evolution of swarm dynamics motivated by the remarkable self-organizing behavior of social insects. Some of the results were excitingly unexpected12 including individual member suicidal sacrifice to extend the overall lifetime of the swarm.13 Evolving digital swarms is intriguing and we have a whole web site devoted to the topic.14 So I have been playing in the evolutionary sandbox for a long time and have dirt under my fingernails to prove it. But is it biology? In reviewing our book for the American Scientific Affiliation (ASA), my friend Randy Isaac, former executive director of the ASA, said of our book, “Those seeking insight into biological or chemical evolution are advised to look elsewhere.”15 We agree! But if you are looking for insights into the models and mathematics thus far proposed by supporters of Darwinian evolution that purport to describe the theory, Introduction to Evolutionary Informatics is spot on. And we show there exists no model successfully describing undirected Darwinian evolution. [--> purpose for, and achievement of, the book.] 5. You use probability inappropriately. Probability theory cannot be applied to events that have already happened. In the movie Dumb and Dumber, Jim Carrey’s character, Lloyd Christmas, is brushed off by beautiful Mary “Samsonite” Swanson when told his chances with her are one in a million. After a pause for introspective reflection, Lloyd’s emergent toothy grin shows off his happy chipped tooth. He enthusiastically blurts out, “So you’re telling me there’s a chance!” Similar exclamations are heard from Darwinian evolutionist advocates. “Darwinian evolution. So you’re telling me there’s a chance!” So again, we didn’t start the probability fire. 
Evolutionary models thrive on randomness described by probabilities. The probability-of-the-gaps championed by supporters of Darwinian evolution is addressed in detail in Introduction to Evolutionary Informatics. We show that the probability resources of the universe and even string theory’s hypothetical multiverse are insufficient to explain the specified complexity surrounding us. [--> So, we are looking at exceedingly low odds of success for blind search, just as I pointed out] Besides, a posteriori probability is used all the time. The size of your last tweet can be measured in bits. Claude Shannon, who coined the term bits in his classic 1948 paper,16 based the definition of the bit on probability. Yet there sits your transmitted tweet with all of its a posteriori bits fully exposed. [--> Shannon info and onward issues, thus distributions with uneven probabilities and the Gibbsean approach, indeed normal English text has e as about 1/8 of the text . . . a phenomenon long known to old-fashioned printers with cases of letters] Another example is a posteriori Bayesian probability commonly used, for example, in email spam filters. What is the probability that your latest email from a Nigerian prince, already received and written on your server, is spam? Bayesian probabilities are also a posteriori probabilities. So a hand-waving dismissal of a posteriori probabilities is ill-tutored. The application of probability in Introduction to Evolutionary Informatics is righteous and the analysis leads to the conclusion that there exists no model successfully describing undirected Darwinian evolution. 6. What about a biological anthropic principle? We’re here, so evolution must work. 
Stephen Hawking has a simple explanation of the anthropic principle: “If the conditions in the universe were not suitable for life, we would not be asking why they are as they are.” Gabor Csanyi, who quotes from Hawking’s talk, says, “Hawking claims, the dimensionality of space and amount of matter in the universe is [a fortuitous] accident, which needs no further explanation.”17 “So you’re telling me there’s a chance!” The question ignored by anthropic principle enthusiasts is whether or not an environment for even guided evolution could occur by chance. If a successful search requires equaling or exceeding some degree of active information, what is the chance of finding any search with as good or better performance? We call this a search-for-the-search. [--> active info puts you essentially at the shores of function, taming the needle in haystack search challenge. It is a case of intelligently directed configuration, i.e. design.] In Introduction to Evolutionary Informatics, we show that the search-for-the-search is exponentially more difficult than the search itself! So if you kick the can down the road, the can gets bigger. [--> a result made plausible from the set of subsets observation, before going into the detailed analysis. If a set has cardinality n, the set of subsets has cardinality 2^n. 500 bits gives 3.27*10^150 possibilities, and 1000 bits, 1.07*10^301. Power sets for these are in calculator smoking territory. Relevance of my remarks continues to be underscored, both on the corpus and the specific current work.] Professor Sydney R. Coleman said after Hawking’s MIT talk, “Anything else is better [than the ‘Anthropic Principle’ to explain something].”18 We agree. For example, check out our search-for-the-search analysis in Introduction to Evolutionary Informatics. 7. What about the claim that “All information is physical”? This is a question we have heard from physicists. 
In physics, Landauer’s principle pertains to the lower theoretical limit of energy consumption of computation and leads to his statement “all information is physical.” Saying “All computers are mass and energy” offers a similar nearly useless description of computers. Like Landauer’s principle, it suffers from the same overgeneralized vagueness and is at best incomplete. Claude Shannon counters Landauer’s claim: It seems to me that we all define “information” as we choose; and, depending upon what field we are working in, we will choose different definitions. My own model of information theory…was framed precisely to work with the problem of communication.19 Landauer is probably correct within the narrow confines of his physics foxhole. Outside the foxhole is Shannon information which is built on unknown a priori probability of events which have not yet happened and are therefore not yet physical. We spend an entire chapter in Introduction to Evolutionary Informatics defining information so there is no confusion when the concept is applied. And we conclude there exists no model successfully describing undirected Darwinian evolution. 8. Information theory cannot measure meaning. Poppycock. A hammer, like information theory, is a tool. A hammer can be used to do more than pound nails. And information theory can do more than assign a generic bit count to an object. The most visible information theory models are Shannon information theory and KCS information.20 The consequence of Shannon’s theory on communication theory is resident in your cell phone where codes predicted by Shannon today allow maximally efficient use of available bandwidth. KCS stands for Kolmogorov-Chaitin-Solomonoff information theory named after the three men who independently founded the field. KCS information theory deals with the information content of structures. 
(Gregory Chaitin, by the way, gives a nice nod-of-the-head to Introduction to Evolutionary Informatics.21) The manner in which information theory can be used to measure meaning is addressed in Introduction to Evolutionary Informatics. We explain, for example, why a picture of Mount Rushmore containing images of four United States presidents has more meaning to you than a picture of Mount Fuji even though both pictures might require the same number of bits when stored on your hard drive. The degree of meaning can be measured using a metric called algorithmic specified complexity. [--> Algorithms of course are functionally coherent and functionally specific structures and are meaningful.] Rather than summarize algorithmic specified complexity derived and applied in Introduction to Evolutionary Informatics, we refer instead to a quote from a paper from one of the world’s leading experts in algorithmic information theory, Paul Vitányi. The quote is from a paper he wrote over 15 years ago, titled “Meaningful Information.”22 One can divide…[KCS] information into two parts: the information accounting for the useful regularity [meaningful information] present in the object and the information accounting for the remaining accidental [meaningless] information.23 In Introduction to Evolutionary Informatics, we use information theory to measure meaningful information and show there exists no model successfully describing undirected Darwinian evolution. 9. To achieve specified complexity in nature, the fitness landscape in evolution keeps changing. So, contrary to your claim, Basener’s ceiling doesn’t apply in Darwinian evolution. In search, complexity can’t be achieved beyond the expertise of the guiding oracle. 
As noted, we refer to this limit as Basener’s ceiling.24 However, if the fitness continues to change, it is argued, the evolved entity can achieve greater and greater specified complexity and ultimately perform arbitrarily great acts like writing insightful scholarly books disproving Darwinian evolution. We analyze exactly this case in Introduction to Evolutionary Informatics and dub the overall search structure stair step active information. Not only is guidance required on each stair, but the next step must be carefully chosen to guide the process to the higher fitness landscape and therefore ever increasing complexity. Most of the next possible choices are deleterious and lead to search deterioration and even extinction. This also applies in the limit when the stairs become teeny and the staircase is better described as a ramp. As Aristotle said, “It is possible to fail in many ways…while to succeed is possible only in one way.” [--> For islands of function ponder sandy barrier islands that can and do change location and shape, then apply to so-called fitness functions] Here’s an anecdotal illustration of the careful design needed in the stair step model. If a meteor hits the Yucatan Peninsula and wipes out all the dinosaurs and allows mammals to start domination of the earth, then the meteor’s explosion must be a Goldilocks event. If too strong, all life on earth would be zapped. If too weak, velociraptors would still be munching on stegosaurus eggs. Such fine tuning is the case of any fortuitous shift in fitness landscapes and increases, not decreases, the difficulty of evolution of ever-increasing specified complexity. It supports the case there exists no model successfully describing undirected Darwinian evolution. [--> Goldilocks zones or islands of function at a higher level] 10. Your research is guided by your ideology and can’t be trusted. There’s that old derailing genetic fallacy again. But yes! Of course, our research is impacted by our ideology! 
We are proud to be counted among Christians such as the Reverend Thomas Bayes, Isaac Newton, George Washington Carver, Michael Faraday, and the greatest of all mathematicians, Leonhard Euler.25 The truth of their contributions stands apart from their ideology. But so does the work of atheist Pierre-Simon Laplace. Truth trumps ideology. And allowing the possibility of intelligent design, embraced by enlightened theists and agnostics alike, broadens one’s investigative horizons. [--> as in bye bye methodological naturalism] Alan Turing, the brilliant father of computer science and breaker of the Nazis’ Enigma code, offers a great example of the ultimate failure of ideology trumping truth. As a young man, Turing lost a close friend to bovine tuberculosis. Devastated by the death, Turing turned from God and became an atheist. He was partially motivated in his development of computer science to prove man was a machine and consequently that there was no need for a god. But Turing’s landmark work has allowed researchers, most notably Roger Penrose,26 to make the case that certain of man’s attributes including creativity and understanding are beyond the capability of the computer. Turing’s ideological motivation was thus ultimately trashed by truth. The relationship between human and computer capabilities is discussed in more depth in Introduction to Evolutionary Informatics. Take Aways In Introduction to Evolutionary Informatics, Chaitin’s challenge has been met in the negative and there exists no model successfully describing undirected Darwinian evolution. According to our current understanding, there never will be. But science should never say never. As Stephen Hawking notes, nothing in science is ever actually proved. We simply accumulate evidence.27 So if anyone generates a model demonstrating Darwinian evolution without guidance that ends in an object with significant specified complexity, let us know. 
No guiding, hand waving, extrapolation of adaptations, appealing to speculative physics, or anecdotal proofs allowed. Until then, I guess you can call us free-thinking skeptics. [--> Of course this pivots on search to space match and resource exhaustion before a search can achieve the near census proportions that give reasonable odds of finding deeply isolated islands of function.] Thanks for listening. Robert J. Marks II PhD is Distinguished Professor of Electrical and Computer Engineering at Baylor University. >> ___________________ Okay, I think we can therefore clear away the raft of objections, unjustified lock-outs and claims of irrelevant focus so far. Now, can we get down to the meat of the matter? KF kairosfocus
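The power-set arithmetic in the annotations above is easy to check directly (a quick sketch; Python integers have arbitrary precision):

```python
# A 500-bit configuration space has 2**500 states; 1000 bits gives 2**1000.
n500 = 2 ** 500
n1000 = 2 ** 1000
assert format(n500, ".2e") == "3.27e+150"
assert format(n1000, ".2e") == "1.07e+301"

# The set of searches-as-subsets is the power set, with 2**n500 members:
# a number with roughly n500 * log10(2) ~ 10**150 digits, far past
# direct computation ("calculator smoking territory").
approx_digits = int(n500 * 0.30103)  # log10(2) ~ 0.30103, approximate
```

The two asserted values match the 3.27*10^150 and 1.07*10^301 figures quoted in the comment, and the last line shows why the search-for-the-search space itself cannot even be written out.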
DiEb, I have just showed how one of the authors connects to the corpus, and above you spoke to that corpus. My point is, that corpus is not in isolation from wider sampling, thermodynamics and information issues; also, the points I made about searches are directly relevant. KF kairosfocus
KF: "Here, then, is my challenge: demonstrate, with citation, how the issues I have put on the table are out of line with the corpus and the current work. Use specific substantial citations not general dismissive talking points or critical summaries by objectors likely to be riddled with distortions." On this thread I'd like to discuss matters pertinent to DEM's latest book, not to ID in general. So I won't wade through thousands and thousands of words just to check whether there is something in Dembski's and Marks's oeuvre which relates to them. The easiest way (for both of us) to check whether anything you have written in this thread has anything to do with "Evolutionary Informatics" would be for you to read the book. But I will take a look at your comment #36: "DiEb, I am highly confident that the work of Dembski and that of Marks is a corpus." Sometimes I think of it more as a corpse, but that's just me. "You have challenged foundational issues on that corpus, to the point of publicly doubting that a sample from a configuration space effects a subset of that space, and professing to have doubts as to how samples may vary from flat random to utterly biased." Wow, I wasn't aware of that. "In addition, you seem to be ignorant of the statistical mechanics background for information theory, and for the informational school of thermodynamics that also lies in the background of our considerations." Thermodynamics is mentioned only once in DEM's new book: as an example of a well-grounded scientific theory. So, I won't discuss thermodynamics on this thread. "You seem to be ignorant of the specific definition of CSI in NFL." Yes, indeed, I am. "You have side-stepped the issues of endogenous and active information much less the challenge of search for a golden search." I have not discussed DEM's concepts of active information, but I will do so in the framework of the book. 
"That pattern of arguing leaves me far less than confident that you have a coherent understanding of the corpus you would critique." Seems to be quite a leap. "Indeed you come across as majoring on the objections (which in this general context are far too often riddled with misunderstandings, at best)." I failed to parse this sentence. "Indeed, it is the gap between that background and your claims that led me to intervene here at all." Umm, thanks for the intervention? DiEb
F/N: Looks like I lost a comment. Let me clip from marks on the current work, but for now this will have no highlights etc: ______________________ https://evolutionnews.org/2017/06/top-ten-questions-and-objections-to-introduction-to-evolutionary-informatics/ >>3. You model evolution as a search. Evolution isn’t a search. We echo Billy Joel: “We didn’t start the fire!” Models of Darwinian evolution, Avida and EV included, are searches with a fixed goal. For EV, the goal is finding specified nucleotide binding sites. Avida’s goal is to generate an EQU logic function. Other evolution models that we examine in Introduction to Evolutionary Informatics likewise seek a prespecified goal. The evolution software Avida is of particular importance because Robert Pennock, one of the co-authors of the first paper describing Avida,4 gave testimony at the Darwin-affirming Kitzmiller et al. v. Dover Area School District bench trial. Pennock’s testimony contributed to Judge Jones’s ruling that teaching about intelligent design violates the establishment clause of the United States Constitution. Pennock testified, “In the [Avida computer program] system, we’re not simulating evolution. Evolution is actually happening.” If true, Avida and thus evolution are a guided search with a specified target bubbling over with active information supplied by the programmers. The most celebrated attempt of an evolution model without a goal of which we’re aware is TIERRA. In an attempt to recreate something like the Cambrian explosion on a computer, the programmer created what was thought to be an information-rich environment where digital organisms would flourish and evolve. According to TIERRA’s ingenious creator, Thomas Ray, the project failed and was abandoned. There has to date been no success in open-ended evolution in the field of artificial life.5 Therefore, there exists no model successfully describing undirected Darwinian evolution. 4. You are not biologists. 
Why should anyone listen to you about evolution? Leave aside that this question reeks of the genetic fallacy used in debate to steer conversation away from the topic at hand and down a rabbit trail of credential defense. The question is sincere, though, and deserves an answer. Besides, it lets me talk about myself. The truth is that computer scientists and engineers know a lot about evolution and evolution models. As we outline in Introduction to Evolutionary Informatics, proponents of Darwinian evolution became giddy about computers in the 1960s and 70s. Evolution was too slow to demonstrate in a wet lab, but thousands and more generations of evolution can be put in the bank when Darwinian evolution is simulated on a computer. Computer scientists and engineers soon realized that evolutionary search might assist in making computer-aided designs. In Introduction to Evolutionary Informatics, we describe how NASA engineers used guided evolutionary programs to design antennas resembling bent paper clips that today are floating and functioning in outer space. Here’s my personal background. I first became interested in evolutionary computation late last century when I served as editor-in-chief of the IEEE6 Transactions on Neural Networks.7 I invited top researchers in the field, David Fogel and his father Larry Fogel, to be the guest editors of a special issue of my journal dedicated to evolutionary computing.8 The issue was published in January 1994 and led to David founding the IEEE Transactions on Evolutionary Computing9 which today is the top engineering/computer science journal dedicated to the topic. My first conference paper using evolutionary computing was published a year later10 and my first journal publication on evolutionary computation was in 1999.11 That was then. More recently my work, funded by the Office of Naval Research, involves simulated evolution of swarm dynamics motivated by the remarkable self-organizing behavior of social insects. 
Some of the results were excitingly unexpected12 including individual member suicidal sacrifice to extend the overall lifetime of the swarm.13 Evolving digital swarms is intriguing and we have a whole web site devoted to the topic.14 So I have been playing in the evolutionary sandbox for a long time and have dirt under my fingernails to prove it. But is it biology? In reviewing our book for the American Scientific Affiliation (ASA), my friend Randy Isaac, former executive director of the ASA, said of our book, “Those seeking insight into biological or chemical evolution are advised to look elsewhere.”15 We agree! But if you are looking for insights into the models and mathematics thus far proposed by supporters of Darwinian evolution that purport to describe the theory, Introduction to Evolutionary Informatics is spot on. And we show there exists no model successfully describing undirected Darwinian evolution. 5. You use probability inappropriately. Probability theory cannot be applied to events that have already happened. In the movie Dumb and Dumber, Jim Carey’s character, Lloyd Christmas, is brushed off by beautiful Mary “Samsonite” Swanson when told his chances with her are one in a million. After a pause for introspective reflection, Lloyd’s emergent toothy grin shows off his happy chipped tooth. He enthusiastically blurts out, “So you’re telling me there’s a chance!” Similar exclamations are heard from Darwinian evolutionist advocates. “Darwinian evolution. So you’re telling me there’s a chance!” So again, we didn’t start the probability fire. Evolutionary models thrive on randomness described by probabilities. The probability-of-the -gaps championed by supporters of Darwinian evolution is addressed in detail in Introduction to Evolutionary Informatics. We show that the probability resources of the universe and even string theory’s hypothetical multiverse are insufficient to explain the specified complexity surrounding us. 
Besides, a posteriori probability is used all the time. The size of your last tweet can be measured in bits. Claude Shannon, who coined the term bits in his classic 1948 paper,16 based the definition of the bit on probability. Yet there sits your transmitted tweet with all of its a posteriori bits fully exposed. Another example is a posteriori Bayesian probability commonly used, for example, in email spam filters. What is the probability that your latest email from a Nigerian prince, already received and written on your server, is spam? Bayesian probabilities are also a posteriori probabilities. So a hand-waving dismissal of a posteriori probabilities is ill-tutored. The application of probability in Introduction to Evolutionary Informatics is righteous and the analysis leads to the conclusion that there exists no model successfully describing undirected Darwinian evolution. 6. What about a biological anthropic principle? We’re here, so evolution must work. Stephen Hawking has a simple explanation of the anthropic principle: “If the conditions in the universe were not suitable for life, we would not be asking why they are as they are.” Gabor Csanyi, who quotes from Hawking’s talk, says, “Hawking claims, the dimensionality of space and amount of matter in the universe is [a fortuitous] accident, which needs no further explanation.”17 “So you’re telling me there’s a chance!” The question ignored by anthropic principle enthusiasts is whether or not an environment for even guided evolution could occur by chance. If a successful search requires equaling or exceeding some degree of active information, what is the chance of finding any search with as good or better performance? We call this a search-for-the-search. In Introduction to Evolutionary Informatics, we show that the search-for-the-search is exponentially more difficult that the search itself! So if you kick the can down the road, the can gets bigger. Professor Sydney R. 
Coleman said after Hawking's MIT talk, “Anything else is better [than the ‘Anthropic Principle’ to explain something].”18 We agree. For example, check out our search-for-the-search analysis in Introduction to Evolutionary Informatics.

7. What about the claim that “all information is physical”?

This is a question we have heard from physicists. In physics, Landauer's principle pertains to the theoretical lower limit on the energy consumption of computation and leads to his statement that “all information is physical.” Saying “all computers are mass and energy” offers a similarly near-useless description of computers. Like Landauer's principle, it suffers from the same overgeneralized vagueness and is at best incomplete. Claude Shannon counters Landauer's claim: “It seems to me that we all define ‘information’ as we choose; and, depending upon what field we are working in, we will choose different definitions. My own model of information theory…was framed precisely to work with the problem of communication.”19 Landauer is probably correct within the narrow confines of his physics foxhole. Outside the foxhole is Shannon information, which is built on the unknown a priori probability of events that have not yet happened and are therefore not yet physical. We spend an entire chapter in Introduction to Evolutionary Informatics defining information so there is no confusion when the concept is applied. And we conclude there exists no model successfully describing undirected Darwinian evolution.

8. Information theory cannot measure meaning.

Poppycock. A hammer, like information theory, is a tool. A hammer can be used to do more than pound nails. And information theory can do more than assign a generic bit count to an object.
The most visible information theory models are Shannon information theory and KCS information.20 The consequences of Shannon's theory for communication are resident in your cell phone, where codes predicted by Shannon's theory allow maximally efficient use of available bandwidth today. KCS stands for Kolmogorov-Chaitin-Solomonoff information theory, named after the three men who independently founded the field. KCS information theory deals with the information content of structures. (Gregory Chaitin, by the way, gives a nice nod of the head to Introduction to Evolutionary Informatics.21) The manner in which information theory can be used to measure meaning is addressed in Introduction to Evolutionary Informatics. We explain, for example, why a picture of Mount Rushmore containing images of four United States presidents has more meaning to you than a picture of Mount Fuji, even though both pictures might require the same number of bits when stored on your hard drive. The degree of meaning can be measured using a metric called algorithmic specified complexity. Rather than summarize algorithmic specified complexity, derived and applied in Introduction to Evolutionary Informatics, we refer instead to a quote from a paper by one of the world's leading experts in algorithmic information theory, Paul Vitányi. The quote is from a paper he wrote over 15 years ago, titled “Meaningful Information”:22 “One can divide…[KCS] information into two parts: the information accounting for the useful regularity [meaningful information] present in the object and the information accounting for the remaining accidental [meaningless] information.”23 In Introduction to Evolutionary Informatics, we use information theory to measure meaningful information and show there exists no model successfully describing undirected Darwinian evolution.

9. To achieve specified complexity in nature, the fitness landscape in evolution keeps changing.
So, contrary to your claim, Basener's ceiling doesn't apply in Darwinian evolution.

In search, complexity can't be achieved beyond the expertise of the guiding oracle. As noted, we refer to this limit as Basener's ceiling.24 However, if the fitness landscape continues to change, it is argued, the evolved entity can achieve greater and greater specified complexity and ultimately perform arbitrarily great acts, like writing insightful scholarly books disproving Darwinian evolution. We analyze exactly this case in Introduction to Evolutionary Informatics and dub the overall search structure stair step active information. Not only is guidance required on each stair, but the next step must be carefully chosen to guide the process to the higher fitness landscape and therefore ever-increasing complexity. Most of the next possible choices are deleterious and lead to search deterioration and even extinction. This also applies in the limit when the stairs become teeny and the staircase is better described as a ramp. As Aristotle said, “It is possible to fail in many ways…while to succeed is possible only in one way.” Here's an anecdotal illustration of the careful design needed in the stair step model. If a meteor hits the Yucatán Peninsula, wipes out all the dinosaurs, and allows mammals to begin dominating the earth, then the meteor's explosion must be a Goldilocks event. If too strong, all life on earth would be zapped. If too weak, velociraptors would still be munching on stegosaurus eggs. Such fine tuning attends any fortuitous shift in fitness landscapes and increases, not decreases, the difficulty of evolving ever-increasing specified complexity. It supports the case that there exists no model successfully describing undirected Darwinian evolution.

10. Your research is guided by your ideology and can't be trusted.

There's that old derailing genetic fallacy again. But yes! Of course, our research is impacted by our ideology!
We are proud to be counted among Christians such as the Reverend Thomas Bayes, Isaac Newton, George Washington Carver, Michael Faraday, and the greatest of all mathematicians, Leonhard Euler.25 The truth of their contributions stands apart from their ideology. But so does the work of the atheist Pierre-Simon Laplace. Truth trumps ideology. And allowing the possibility of intelligent design, embraced by enlightened theists and agnostics alike, broadens one's investigative horizons. Alan Turing, the brilliant father of computer science and breaker of the Nazis' Enigma code, offers a great example of the ultimate failure of ideology trumping truth. As a young man, Turing lost a close friend to bovine tuberculosis. Devastated by the death, Turing turned from God and became an atheist. He was partially motivated in his development of computer science by a desire to prove man was a machine, and consequently that there was no need for a god. But Turing's landmark work has allowed researchers, most notably Roger Penrose,26 to make the case that certain of man's attributes, including creativity and understanding, are beyond the capability of the computer. Turing's ideological motivation was thus ultimately trashed by truth. The relationship between human and computer capabilities is discussed in more depth in Introduction to Evolutionary Informatics.

Take Aways

In Introduction to Evolutionary Informatics, Chaitin's challenge has been met in the negative: there exists no model successfully describing undirected Darwinian evolution. According to our current understanding, there never will be. But science should never say never. As Stephen Hawking notes, nothing in science is ever actually proved; we simply accumulate evidence.27 So if anyone generates a model demonstrating Darwinian evolution without guidance that ends in an object with significant specified complexity, let us know.
No guiding, hand waving, extrapolation of adaptations, appealing to speculative physics, or anecdotal proofs allowed. Until then, I guess you can call us free-thinking skeptics. Thanks for listening. Robert J. Marks II, PhD, is Distinguished Professor of Electrical and Computer Engineering at Baylor University.

Notes:

(1) Chaitin, Gregory. Proving Darwin: Making Biology Mathematical. Vintage, 2012. (2) Marks II, Robert J., William A. Dembski, and Winston Ewert. Introduction to Evolutionary Informatics. World Scientific, 2017. (3) Ecclesiastes 12:12b. (4) Lenski, R.E., Ofria, C., Pennock, R.T. and Adami, C., 2003. “The evolutionary origin of complex features.” Nature, 423(6936), pp. 139-144. (5) ID the Future podcast with Winston Ewert. “Why Digital Cambrian Explosions Fizzle…Or Fake It,” June 7, 2017. (6) IEEE, the Institute of Electrical and Electronics Engineers, is the largest professional society in the world, with over 400,000 members. (7) R.J. Marks II, “The Journal Citation Report: Testifying for Neural Networks,” IEEE Transactions on Neural Networks, vol. 7, no. 4, July 1996, p. 801. (8) Fogel, David B., and Lawrence J. Fogel. “Guest editorial on evolutionary computation,” IEEE Transactions on Neural Networks 5, no. 1 (1994): 1-14. (9) R.J. Marks II, “Old Neural Network Editors Don’t Die, They Just Prune Their Hidden Nodes,” IEEE Transactions on Neural Networks, vol. 8, no. 6 (November 1997), p. 1221. (10) Russell D. Reed and Robert J. Marks II, “An Evolutionary Algorithm for Function Inversion and Boundary Marking,” Proceedings of the IEEE International Conference on Evolutionary Computation, pp. 794-797, November 26-30, 1995. (11) C.A. Jensen, M.A. El-Sharkawi and R.J. Marks II, “Power Security Boundary Enhancement Using Evolutionary-Based Query Learning,” Engineering Intelligent Systems, vol. 7, no. 9, pp. 215-218 (December 1999). (12) Jon Roach, Winston Ewert, Robert J. Marks II and Benjamin B.
Thompson, “Unexpected Emergent Behaviors from Elementary Swarms,” Proceedings of the 2013 IEEE 45th Southeastern Symposium on Systems Theory (SSST), Baylor University, March 11, 2013, pp. 41-50. (13) Winston Ewert, Robert J. Marks II, Benjamin B. Thompson, Albert Yu, “Evolutionary Inversion of Swarm Emergence Using Disjunctive Combs Control,” IEEE Transactions on Systems, Man and Cybernetics: Systems, vol. 43, no. 5, September 2013, pp. 1063-1076. Albert R. Yu, Benjamin B. Thompson, and Robert J. Marks II, “Swarm Behavioral Inversion for Undirected Underwater Search,” International Journal of Swarm Intelligence and Evolutionary Computation, vol. 2 (2013). Albert R. Yu, Benjamin B. Thompson, and Robert J. Marks II, “Competitive Evolution of Tactical Multiswarm Dynamics,” IEEE Transactions on Systems, Man and Cybernetics: Systems, vol. 43, no. 3, pp. 563-569 (May 2013). (14) NeoSwarm.com. (15) Review of Introduction to Evolutionary Informatics, Perspectives on Science and Christian Faith, vol. 69, no. 2, June 2017, pp. 104-108. (16) Claude E. Shannon, “A mathematical theory of communication,” Bell System Technical Journal 27: 379-423 and 623-656. (17) Gabor Csanyi, “Stephen Hawking Lectures on Controversial Theory,” The Tech, vol. 119, issue 48, Friday, October 8, 1999. (18) The bracketed insertion in the quote is Csanyi’s, not ours. (19) Quoted in P. Mirowski, Machine Dreams: Economics Becomes a Cyborg Science (New York: Cambridge University Press, 2002), 170. (20) Cover, Thomas M., and Joy A. Thomas. Elements of Information Theory. John Wiley & Sons, 2012. (21) Review for Introduction to Evolutionary Informatics.
(22) Paul Vitányi, “Meaningful Information,” in International Symposium on Algorithms and Computation: 13th International Symposium, ISAAC 2002, Vancouver, BC, Canada, November 21-23, 2002. (23) Unlike our approach, Vitányi’s use of the so-called Kolmogorov sufficient statistic here does not take context into account. (24) Basener, W.F., 2013. “Limits of Chaos and Progress in Evolutionary Dynamics.” Biological Information — New Perspectives. World Scientific, Singapore, pp. 87-104. (25) Christian Calculus. (26) See, e.g., Penrose, Roger. Shadows of the Mind. Oxford University Press, 1994. (27) Hawking, Stephen. A Brief History of Time (1988). AppLife, 2014.>>
______________________
The connexions to the corpus and to my thoughts should be reasonably clear, but later. KF
PS: Mung, sadly, no graphics in UD comments! kairosfocus
Here, then, is my challenge: demonstrate, with citation, how the issues I have put on the table are out of line with the corpus and the current work. Use specific substantial citations not general dismissive talking points or critical summaries by objectors likely to be riddled with distortions.
Use some pretty graphs too, if you don't mind. Mung
DiEb, I am highly confident that the work of Dembski and that of Marks is a corpus. You have challenged foundational issues on that corpus, to the point of publicly doubting that a sample from a configuration space effects a subset of that space, and professing to have doubts as to how samples may vary from flat random to utterly biased. In addition, you seem to be ignorant of the statistical mechanics background for information theory, and for the informational school of thermodynamics that also lies in the background of our considerations. You seem to be ignorant of the specific definition of CSI in NFL. You have side-stepped the issues of endogenous and active information, much less the challenge of the search for a golden search. That pattern of arguing leaves me far less than confident that you have a coherent understanding of the corpus you would critique. Indeed, you come across as majoring on the objections (which in this general context are far too often riddled with misunderstandings, at best). Indeed, it is the gap between that background and your claims that led me to intervene here at all. Here, then, is my challenge: demonstrate, with citation, how the issues I have put on the table are out of line with the corpus and the current work. Use specific substantial citations, not general dismissive talking points or critical summaries by objectors likely to be riddled with distortions. KF kairosfocus
KF: "As for your debate points on the latest book, I suggest that whatever is in it will be in more or less this general context" So, you have not read the book, but nevertheless you are writing thousands of words as you think they may be pertinent to a discussion of the book? They are not. At least Dionisio's off-topic remarks were short! And he did not have the audacity to ask me whether he was perhaps talking about the subject of the book (KF: "Kindly, explain what is wrong or irrelevant with...") If you had read DEM's latest work, you might have spotted it for yourself! DiEb
DiEb, statistical thermodynamics cases are all around us and indeed inside us; our very existence as air breathers is premised on the utter reliability of fluctuations in oxygen distribution, such that a case where spontaneously there is no O2 in our breaths for, say, five minutes is in effect unobservable. I have already pointed out that the spaces relevant to FSCO/I etc. are such that on 10^17 s and 10^57 atoms at fast rxn rates, or even for 10^80 atoms, search necessarily yields an extremely sparse sample, and islands of function are therefore practically unobservable on blind search. Indeed, as I clipped earlier, this is the context in which active information is applicable. As for your debate points on the latest book, I suggest that whatever is in it will be in more or less this general context. KF kairosfocus
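The sparse-sample arithmetic in the comment above can be put into rough numbers. The figures below are the round values KF cites (10^57 atoms, 10^17 s) plus an assumed fast sampling rate; this is a back-of-envelope sketch, not a calculation taken from the book:

```python
import math

# Rough round numbers only; the sampling rate is an assumption for illustration.
atoms = 10**57          # order-of-magnitude atom count for the solar system
rate = 10**14           # assumed fast reaction/sampling rate, events per second
seconds = 10**17        # rough age of the universe in seconds

max_samples = atoms * rate * seconds    # upper bound on blind samples: 10^88
space = 2**500                          # a 500-bit configuration space

fraction = max_samples / space          # share of the space ever examined
print(math.log10(fraction))             # about -62.5: an extremely sparse sample
```

Even granting every atom a fast sampling rate for the age of the universe, only about one part in 10^62 of a 500-bit space gets examined, which is the sense in which the sample is "extremely sparse."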
DiEb, you have done statistics, so you are aware that searches can be flat random or biased, in extreme cases picking a definite outcome with certainty. It is also the case that in a needle-in-a-haystack situation, resources may be such that sufficiently rare cases are unobservable in practice on a blind search though potentially observable in theory. The effect in stat thermo-d terms is similar to why a spontaneous reduction in entropy is effectively unobservable for relevant cases. That is, there are predominant clusters that so overwhelm the space by relative statistical weight that they overshadow other clusters, especially given that resources are grossly inadequate to effect a near census. KF kairosfocus
@KF:
it should be obvious that the context for the relevant searches is that a census is infeasible. A direct implication of haystack metaphors.
So you are trying to develop a theory which just does not work on the examples we can actually test it on? Furthermore, DEM introduce God as one player in their games. He is certainly capable of performing a complete search. DiEb
@KF:
it should be a triviality to acknowledge that a search of a space of possibilites is a sample from that space;
Okay. A search is taking a sample from a space. But how does this impose a probability distribution over the search space? DiEb
KF, Recently I met a couple of European biologists who told me that informatics was the most important science. I told them I consider informatics a sophisticated tool for scientists to get more work done. Then I added that the most fascinating science today is biology. It's funny that the biologists didn't see how important their own field is until an outsider told them so. Especially someone who is closer to IT than they are. In biology we ain't seen nothin' yet. The most fascinating discoveries are still ahead. We're on the winning side. It's a matter of time before all will realize this. Dionisio
From the discussions on this website one can easily see that the Darwinian paradigm faces many layers of challenges, none of which seem close to resolution. Therefore, even in the hypothetical case where one of the challenges gets resolved, there are more challenges waiting. The problem for the Darwinian ideas is that new discoveries may help to answer some outstanding questions while raising new ones. It seems like a never-ending story. The more we know, the more we have to learn. That's the reality in modern biology. Dionisio
KF @26:
The sampled elements are tested against a success filter, and then search terminates for the moment on success, repeats if no success, until resources are used up. The haystack challenge being, that resources are utterly inadequate to attain reasonable odds of success.
This is clearly stated, but understanding requires will. Nobody will understand anything unless they want to understand it. My wife and my daughters sometimes discuss things I'm not interested in understanding. My sons and I sometimes talk about things that the ladies don't care to understand because they don't like the topic (cars, gadgets, technology, sport, etc). Sometimes I mention something about an interesting paper I've read, but none of them care to understand what I want to explain. KF's comments are related to the topic of this discussion thread. But they're understandable only to those who want to understand them. Dionisio
PPS: DiEb, it should be obvious that the context for the relevant searches is that a census is infeasible. A direct implication of haystack metaphors. Another is that a census -- a complete scan of possibilities -- is not happening, so it is not a valid test of the model; the likelihood of success of a relevant search is bounded by 1 where a census is possible, but in the relevant cases a blind search has the next best thing to 0 probability of success: too much haystack, needles isolated, too few resources on the scope of the sol system or observed cosmos. As for the quip about unfamiliarity with JPG files, Marks is an Electrical Engineer working with computers and Dembski a Mathematician working with computers. You seem to have a serious problem with an elementary point, that searches necessarily sample from a space and so collect subsets. BTW, Rom 1:20 is a readily investigated reference that can readily be followed up, so no implication of assumed familiarity to contrast with JPG . . . Joint Photographic Experts Group file format IIRC . . . is appropriate; this is the Google age. And so forth. kairosfocus
DiEb, it seems, sadly, that the point that evident truth takes priority over relevance has not registered. It should be a triviality to acknowledge that a search of a space of possibilities is a sample from that space; refusal to admit that speaks, and says this must be locked out of consideration -- and not on its merits. FYI, it is patent that a search of a config space examines a subset of elements, in serial and/or in parallel, sampling from the space in some pattern. Thus the subspace examined, necessarily, is a subset. (A census would examine the whole space. The problem of deeply isolated zones in vast spaces . . . needle in haystack search . . . is that a near census is required on a blind search, which then becomes infeasible on available resources.) If you are unwilling to acknowledge this preliminary, elementary fact (not to mention its obvious relevance), that is already decisive. And not in your favour, with all due respect. For, you have never been a mere troll. Then, when you insist on denying the facts on the earlier work despite my outline, that underscores the point. Please, rethink the rhetorical stance you have taken. KF PS: Notice, this abstract:
Conservation of Information in Search: Measuring the Cost of Success [with Erratum]
William A. Dembski and Robert J. Marks II
Conservation of information theorems indicate that any search algorithm performs on average as well as random search without replacement unless it takes advantage of problem-specific information about the search target or the search-space structure. Combinatorics shows that even a moderately sized search requires problem-specific information to be successful. Three measures to characterize the information required for successful search are (1) endogenous information, which measures the difficulty of finding a target using random search; (2) exogenous information, which measures the difficulty that remains in finding a target once a search takes advantage of problem-specific information; and (3) active information, which, as the difference between endogenous and exogenous information, measures the contribution of problem-specific information for successfully finding a target. This paper develops a methodology based on these information measures to gauge the effectiveness with which problem-specific information facilitates successful search. It then applies this methodology to various search tools widely used in evolutionary search.
Also, this:
The Search for a Search: Measuring the Information Cost of Higher Level Search
William A. Dembski and Robert J. Marks II
Needle-in-the-haystack problems look for small targets in large spaces. In such cases, blind search stands no hope of success. Conservation of information dictates any search technique will work, on average, as well as blind search. Success requires an assisted search. But whence the assistance required for a search to be successful? To pose the question this way suggests that successful searches do not emerge spontaneously but need themselves to be discovered via a search. The question then naturally arises whether such a higher-level “search for a search” is any easier than the original search. We prove two results: (1) The Horizontal No Free Lunch Theorem, which shows that average relative performance of searches never exceeds unassisted or blind searches, and (2) The Vertical No Free Lunch Theorem, which shows that the difficulty of searching for a successful search increases exponentially with respect to the minimum allowable active information being sought.
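The three measures defined in the first abstract can be illustrated with toy numbers; the space size and the assisted-search success probability below are invented for the example, not taken from the papers:

```python
import math

# Toy illustration of endogenous, exogenous, and active information.
# All numbers here are invented for the example.
space_size = 2**20                 # configuration space of about a million elements
p = 1 / space_size                 # blind (uniform) search success probability
q = 1 / 64                         # assumed success probability of an assisted search

endogenous = -math.log2(p)         # difficulty of the blind search, in bits
exogenous = -math.log2(q)          # difficulty remaining once assistance is applied
active = endogenous - exogenous    # information contributed by the assistance

print(endogenous, exogenous, active)   # 20.0 6.0 14.0
```

On these toy numbers the assistance contributes 14 of the 20 bits needed, which is the sense in which active information measures "the contribution of problem-specific information."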
Ask yourself, how does one look for small targets in large spaces and know one has found such? By SAMPLING cases from the space, in parallel and/or in succession. The sampled elements are tested against a success filter, and then the search terminates for the moment on success, or repeats if there is no success, until resources are used up. The haystack challenge being, that resources are utterly inadequate to attain reasonable odds of success. From this we can get to active info that converts the problem into a more feasible search. And in each case, sampling, and thus selecting a subset of the overall space of possible configurations, is directly implied. Where also, the challenge of finding the shoreline of islands of function dominates and is thus prior to hoped-for hill-climbing within such an island. Issues of plateaus, intervening valleys locking into sub-maxima, etc. obtain within such an island. But first one has to get to some minimal degree of function based on complex specific configuration. The haystack challenge is not credibly feasible on sol system or cosmos scope resources, absent injection of active information. Which has one credible source: intelligent design. The attempt to lock out my discussion of how search samples and thus selects a subset fails. kairosfocus
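The loop described in the comment above (sample blindly, test against a success filter, stop on success or when the resource budget runs out) can be sketched as follows; the function and parameter names are illustrative, not code from any of the works discussed:

```python
import random

# A minimal sketch of a blind search: draw samples from an n-bit space,
# test each against a success filter, stop on success or budget exhaustion.
def blind_search(space_bits, is_success, budget, seed=0):
    rng = random.Random(seed)
    for trial in range(1, budget + 1):
        candidate = rng.getrandbits(space_bits)  # one blind sample of the space
        if is_success(candidate):
            return trial        # resources spent before the filter fired
    return None                 # budget exhausted: search failed

# A permissive filter succeeds immediately; an unsatisfiable one exhausts the budget.
print(blind_search(16, lambda c: True, budget=5))    # 1
print(blind_search(16, lambda c: False, budget=5))   # None
```

When the target set is tiny relative to `2**space_bits` and the budget is small, the `None` branch dominates, which is the "sparse sample" outcome the comment describes.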
FYI Please, note that until the evo-devo literature shows macro-evolutionary cases of biological systems (ca,d1,d2) that rigorously meet the formulation described @1090 in the thread “A third way of evolution?”, any discussion on related topics is pure speculation. Archaic pseudoscientific hogwash shouldn’t be part of any serious explanation. Dionisio
KF: "I could say more" And so you did - but your “definition” is still irrelevant to this thread, as: i. it is not used in the book under discussion; ii. it is not used in DEM’s earlier works; iii. it is not clear how it “imposes a probability distribution over the search space”. DiEb
DiEb:
1 --> The issue, proper, is whether searches are in fact samples from a space of possible configurations, i.e. states from a state space. Manifestly, they are, and that is a tie-in with the vast resources of studies of state, phase and configuration. Where,
2 --> it is clear from the body of work by Dembski et al that it is relevant; indeed the very symbol for possibility spaces in, say, NFL comes directly from the familiar Omega of statistical mechanics, and the target or designated zone used is a subset thereof.
3 --> Such is also implied in the concepts of search and active information, where active information on injection shifts the odds of locating an otherwise infeasible target zone into being a much more likely outcome.
4 --> Further to this, it is a commonplace of statistical mechanics, in Gibbsian form, to address an uneven distribution of probabilities of microstates, hence expressions of the form -SUM p_i ln p_i.
5 --> This very structure is exactly the pattern that is a metric of information in info theory, average info per symbol.
6 --> Yet further, there is an informational school of thought on thermodynamics which defines entropy as the average missing info on the particular microstate when all we have is the gross macrostate; which points to the search issue.
7 --> I could say more, but the point seems to be sufficiently shown that the context and underlying references for the work of Dembski et al have been missed.
8 --> Returning to my points above, blind chance and/or mechanical necessity can be seen as sampling a config space; indeed a classic tool is to construct, in imagination, a vast array of similar stat mech systems with similar initial start-points and allow them to play out. This population of systems will then give a distribution across possibilities. The alternative is to imagine one system there for an exceeding length of time and consider dwell time in clusters of micro-states. Relative dwell time evaluates to relative likelihood.
9 --> As one extremum, all microstates can be equiprobable (Boltzmann's approach, giving the S = k log W result) -- maximal uncertainty -- and at the other, one state has unity probability and the rest nil, i.e. the state is determined. In between we may consider all sorts of possible distributions. Bottom line: the concept of searches as sampling the population of possibilities, and as thus coming from the set of subsets of an underlying config space, is reasonable. As on some description language this can be a bit string, consideration of a binary space from 000 . . . 0 to 111 . . . 1 is WLOG. Weight the likelihood of a given config as appropriate, set your initial case and other constraints, allow a walk, of whatever character, and filter for success based on configuration. Going back ten years and more, this has been the underlying framework in which I have viewed Dembski's thoughts and wider thoughts on FSCO/I. Kindly cf my always linked note through my handle. I would be utterly astonished to learn that this is not the background to whatever considerations are now being advanced. Indeed, I should note that while I picked up the term Island of Function from GP, its root is in Dembski. Let me clip NFL, in closing:
p. 148:“The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity, or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . . Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole. Dembski cites: Wouters, p. 148: "globally in terms of the viability of whole organisms," Behe, p. 148: "minimal function of biochemical systems," Dawkins, pp. 148-9: "Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is . . . the ability to propagate genes in reproduction." On p. 149, he roughly cites Orgel's famous remark from 1973, which exactly cited reads: In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. 
Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . And, p. 149, he highlights Paul Davies in The Fifth Miracle: "Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity."] . . .” [--> Boiled down: in biosystems we have functional filters tantamount to islands of organised function deeply isolated in seas of non function, imposing a search challenge to get to shores of function before hill climbing can happen. Active info or oracles get you to a good enough close point that the odds shift in favour of getting there] p. 144: [[Specified complexity can be more formally defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E [[ is specified independent of simply picking it out], and T measures at least 500 bits of information . . . [[T reflects that degree of complexity]”
KF kairosfocus
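The clipped NFL passage pairs a universal probability bound of 1 in 10^150 with a universal complexity bound of roughly 500 bits; the conversion between the two is a single base-2 logarithm, as this small check illustrates:

```python
import math

# Converting the quoted probability bound of 1 in 10^150 into bits.
upb_bits = -math.log2(10.0**-150)
print(round(upb_bits, 1))        # 498.3, i.e. roughly the quoted 500 bits

# Conversely, a 500-bit specification has probability 2^-500 under a
# uniform draw, which is already below the 10^-150 bound.
print(2.0**-500 < 1e-150)        # True
```

The 500-bit figure in the quote is thus a slightly conservative rounding-up of -log2(10^-150) ≈ 498.3 bits.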
Your "definition" is irrelevant to this thread as i. it is not used in the book under discussion and ii. it is not used in DEM's earlier works. I don't see how your explanation "imposes a probability distribution over the search space". DiEb
Kindly explain what is wrong or irrelevant with: "search is in effect a synonym for sampling, from flat random to extremely biased. The point is, search and space and in many cases initial point — think random walk — have to be well matched for success such as high function based on configuration to emerge. Search is thus inherently a complex topic . . . " KF PS: Search: Se_x = Sa_x (Space_y) | start place_z --> Success filter. That is, a particular search and a particular space must be given, plus an initial point, which can be forced or can itself be a prior random or intentional choice. kairosfocus
@KF: I did. That's why I posted #18. DiEb
DiEb, Please see my discussion above. KF kairosfocus
@KF, I have no intention to trivialize the subject: in Biological Information: New Perspectives, the authors took six pages to define the term "search". If something is so complicated, I find it often useful to look first at simple examples. And I don't think that the complex definition works without paradoxical consequences, the most obvious being that a complete search of a finite search space (always a mathematical possibility, and for a small space of size 3 or 4 obviously doable) performs, on average over all "discriminators", no better than a single guess. DiEb
DiEb, search is in effect a synonym for sampling, from flat random to extremely biased. The point is, search and space and in many cases initial point -- think random walk -- have to be well matched for success such as high function based on configuration to emerge. Search is thus inherently a complex topic, and to pretend that it can be made simplistic is to fail to address reality.

Further, search on material resources, i.e. search with resource and temporal costs [and often monetary ones], is constrained by that reality, so that the scope of the space to be searched is a highly relevant issue. I am sure you have seen decision analysis on whether the anticipated cost of further investigation is likely to be worthwhile given the reward likely to be obtained. This is part of the bounded rationality problem in decision-making. One of the constraints here is the likely scope of the space of possibilities versus the available resources to search it, especially if the search is by blind chance and/or mechanical necessity. A space of 4 bits is readily exhaustively searched; one of 500 bits is infeasible on the gamut of the sol system, and 1,000 bits on the gamut of the observed cosmos. This is directly relevant to the search for viable configs for the origin of cell-based life in a Darwin warm pond or the like.

Boiled down: as a search is a sample, a search for a golden search comes from the power set of the original space, which is exponentially harder. So, direct first-level search is the practical upper limit. On this, when we deal with complex configuration-based function, blind search is maximally unlikely to access relevant function, but we know -- as your comment illustrates -- that design routinely and rapidly generates solutions. So we can readily see how a complexity threshold of 500 - 1,000 bits becomes a good test for design as the most credible causal explanation. Indeed, of trillions of observed cases of such FSCO/I, never has there been an observation of such arising by blind chance and mechanical necessity. We routinely produce results by design that exceed the limit. So, no, this is not something to be lightly dismissed. KF kairosfocus
@KF, a good definition for a mathematical term like "search" should work for a small space or a very large one without producing paradoxes. The definition of a search as proposed in “A General Theory of Information Cost Incurred by Successful Search” is already problematic for a shell game with three cards. I agree that it is fun to speculate about "sol system resources" and such, but that is not necessary in this case. DiEb
DiEb, the problem is that it is not hard for a search space to become so large that reasonable search becomes impossible. Under those conditions, the match of strategy, start-point, and specifics of the space becomes important, especially if a space does not have the sort of convenient pointing slopes that some use to convey a misleading impression of the likelihood of search success. Where 500 bits specifies 3.27*10^150 possibilities and 1,000 bits, 1.07*10^301: the first exhausts sol system resources and the latter those of the observed cosmos. And of course, with a suitable description language, all searches come down to searches on binary spaces of suitable bit depth. KF kairosfocus
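As a quick back-of-envelope check of the figures cited in this exchange (a sketch of my own, not from the book), the sizes of binary configuration spaces can be computed directly; the function name below is mine:

```python
# A space of n bits has 2**n distinct configurations; the cost of an
# exhaustive search scales with this count.
def space_size(bits: int) -> int:
    return 2 ** bits

for bits in (4, 500, 1000):
    print(f"{bits:>4} bits -> {float(space_size(bits)):.2e} configurations")
# 4 bits (16 configurations) is trivially enumerable, while 500 bits
# gives ~3.27e150 and 1000 bits ~1.07e301, matching the figures above.
```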
EricMH: " My search strategy determines the probability I will find said keys, and my search strategies tend to not have adequate active information." But if you perform a complete search over the search space with your keys in it, you will find your keys, won't you? And you are able to see that you have successfully performed your search yourself, without an independent agency telling you that those were the keys you were looking for.... DiEb
EricMH: “What is incorrect in the book?” E.g., p. 173:
We note, however, the choice of a [search] algorithm along with its parameters and initialization imposes a probability distribution over the search space.
They have not shown this yet. The definitions they used in their previous papers lead to paradoxical results. DiEb
EricMH: "What is incorrect in the book?" E.g., p. 77:
The performance of proportional betting is akin to that of a search algorithm. For proportional betting, you want to extract the maximum amount of money from the game in a single bet. In search, you wish to extract the maximum amount of information in a single query. The mathematics is identical.
On the preceding pages, they have described proportional betting as a strategy which works in the long run, i.e., if you are allowed to reinvest your capital over a string of bets. For a single bet it isn't generally the best strategy, so their analogy fails. DiEb
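The single-bet versus long-run distinction can be checked with a toy simulation (my own illustration, not from the book; the game, odds, and function name are assumptions): an even-money game with win probability 0.6, where the proportional (Kelly) fraction is 2p - 1 = 0.2.

```python
import random

random.seed(1)
P_WIN = 0.6              # biased even-money game (assumed for illustration)
KELLY = 2 * P_WIN - 1    # proportional (Kelly) fraction for this game: 0.2

def median_final(fraction, rounds=100, trials=2000):
    """Median bankroll after repeatedly betting `fraction` of capital."""
    finals = []
    for _ in range(trials):
        bank = 1.0
        for _ in range(rounds):
            stake = bank * fraction
            bank += stake if random.random() < P_WIN else -stake
        finals.append(bank)
    finals.sort()
    return finals[len(finals) // 2]

# Single bet: the expected value 1 + f*(2*P_WIN - 1) grows with f, so it
# is maximised by staking everything -- proportional betting is NOT the
# single-bet optimum. Long run: all-in is almost surely ruined, while
# the proportional bettor compounds.
print("all-in median:", median_final(1.0))    # 0.0: one loss wipes you out
print("Kelly  median:", median_final(KELLY))  # well above the starting 1.0
```

This is only a sketch of the general point DiEb is making: proportional betting optimizes the long-run growth rate, not the outcome of one isolated bet.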
EricMH, The politely dissenting interlocutors don't have any valid argument in this discussion. They're just barking up the wrong trees. Let's be gracious to them. They don't know what they're talking about. Dionisio
To Whom This May Concern Please, note that until the evo-devo literature shows macro-evolutionary cases of biological systems (ca,d1,d2) that rigorously meet the formulation described @1090 in the thread “A third way of evolution?”, any discussion on related topics is pure speculation. Archaic pseudoscientific hogwash shouldn’t be part of any serious explanation. Dionisio
Where is the information that is used by the biological systems in order to determine the localization of the morphogen sources? Where is the information that is used by the biological systems in order to determine the morphogen secretion timing and rate at the sources? Dionisio
@DiEb Searches fail all the time. Like all the times I've lost my keys and never find them again despite prolonged search. My search strategy determines the probability I will find said keys, and my search strategies tend to not have adequate active information. What is incorrect in the book? That's the more interesting question. EricMH
@EricMH #5 - I don't think the book is correct; it is just too superficial to be so. Take, e.g., the concept of a complete search of a finite search space. Any sensible definition of a search should lead to the conclusion that at the end of a complete search the target has been identified with probability 1. Don't you agree? The definitions of DEM don't work that way. The last official one, which they proposed in 2013 in their paper "A General Theory of Information Cost Incurred by Successful Search", has as a result that a complete search performs, on average over all applicable search spaces, no better than a single guess. DEM are well aware of this paradox. This may have contributed to the omission of a definition for their central concept of a search... DiEb
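The intuition DiEb appeals to here, that a completed exhaustive search finds the target with probability 1, is easy to make concrete (a sketch; the three-element "shell game" space and function name below are my own stand-ins):

```python
def exhaustive_search(space, is_target):
    """Query every element once; return the target if it is in the space."""
    for candidate in space:
        if is_target(candidate):
            return candidate
    return None  # the target is not in the space at all

# Whichever of the three positions hides the pea, checking all of them
# finds it every time: success probability 1, not 1/3.
space = ["shell_1", "shell_2", "shell_3"]
for hidden in space:
    assert exhaustive_search(space, lambda c: c == hidden) == hidden
```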
@EricMH #4 - for me, this non-controversial comp. sci. statement shows that DEM are preaching to the choir. And like the Latin-speaking priests of the Middle Ages, they do so in a language which is barely understood by their congregation. DiEb
Look, Marks' book is correct. There is nothing in his book that is wrong from an engineering and comp. sci. perspective. The only thing you can take issue with is whether the models are good representations of evolution. But the models themselves are proposed by Darwinists as being good representations. All Marks & co. do is show that the models cannot produce information. It is Darwinists making the claim and Marks showing the claim is false, and he succeeds. EricMH
It's funny you take issue with a non-controversial comp. sci. statement. EricMH
The nature of this book allows the authors to skip over all the problems of their ideas and omit difficult definitions: while they talk about "searches" for dozens and dozens of pages, they never define what a "search" is. One of the most problematic sentences is on page 173: "We note, however, the choice of a [search] algorithm along with its parameters and initialization imposes a probability distribution over the search space". Does it really? The authors have tried to show this in a couple of ways in various papers, and each of their approaches seemed to be riddled with further problems. So they just side-step this crucial bit of their theory. DiEb
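For what it's worth, the quoted sentence is at least easy to illustrate on a toy example (my own, not from the book, and it does not settle the general claim DiEb disputes): fixing an algorithm, its parameters, and a random initialization scheme does, in this concrete case, induce a distribution over where the search ends up.

```python
import random
from collections import Counter

random.seed(0)

# Toy fitness landscape on the points 0..9, with peaks at indices 2 and 7.
fitness = [0, 1, 3, 1, 0, 1, 2, 5, 2, 1]

def hill_climb(start):
    """Greedily move to the better neighbour until stuck; return endpoint."""
    x = start
    while True:
        neighbours = [n for n in (x - 1, x + 1) if 0 <= n < len(fitness)]
        best = max(neighbours, key=lambda n: fitness[n])
        if fitness[best] <= fitness[x]:
            return x
        x = best

# Uniformly random starts plus this algorithm put all the probability
# mass on the two peaks; a different algorithm or initialization scheme
# would induce a different distribution over the same space.
ends = Counter(hill_climb(random.randrange(10)) for _ in range(10_000))
print(dict(ends))  # only endpoints 2 and 7 appear, roughly 50/50 here
```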
It's telling that the authors expect their readers to know important verses of the Bible by heart ("Secondly we believe a la Romans 1:20 and like verses that the implications of this work in the apologetics of perception of meaning are profound"), but assume that they have not heard of the most common technical terms ("JPG: pronounced JAY-peg"). DiEb
Some familiar names there. Bob O'H
