Uncommon Descent Serving The Intelligent Design Community

So the Evolutionary Informatics Lab has been “debunked” … by no one worth listening to?


Recently, one of our worthy dudgeoneers wrote to say that the world biosphere sanctuary for Darwin trolls has featured posts claiming that the work of Bob Marks’ Evolutionary Informatics Lab has been “debunked,” offering as evidence a link to a technical rebuttal at “Metropolis Sampler – a biased sampling of the world.”

With respect, but … so? What technical paper debunked it? As opposed to, say, an asshat online who was mad about it years ago? Marks’ work is in a peer-reviewed technical journal. Do the Darwinists expect us to give equal credence to an anonymous tendentious internet troll?

Darwin’s math does not add up, and Darwinists have evaded that fact for 150 years. Darwinism lives on court decisions, bureaucratic rules, and unjust dismissals, not on evidence that natural selection acting on random mutation creates the world of life we see around us.

That was Darwin’s original claim and it is either true or it is not. And we don’t find any convincing evidence that it is true.

While we are here, what’s with all this Euro-garbage about North Americans not being interested in science? Who built the space shuttle and the Canadarm? Or the CANDU reactor?

We North Americans haven’t been brought up to straight-arm salute obvious falsehoods just to get along. That may be somebody’s problem, but it isn’t necessarily ours. Which explains why we tend not to believe Darwinism.

Just sayin’ is all.

Follow UD News at Twitter!

Sorry, something went wrong with the last link; I'll try it again: ...frustration of Tom English... DiEb
Thanks Rob! But this desirable outcome is highly doubtful: as I said, my proof lacks the elegance and sophistication of the one in the paper - it just has the one little advantage of being correct.... The switch from "I" to an EIL-including "we" in the last sentence of Winston Ewert's comment seems to indicate a change of heart: Winston, will you limit your interest to peer-reviewed criticism, too? Or have you read my little essay? Did you find any faults? If not, you are another one at the EIL who knows that the HFLT is problematic (to say the least). To ignore the problem until someone bothers to mention it in a peer-reviewed paper isn't something I would expect of one of the twenty Most Brilliant Christian Professors! Really, I understand the frustration of Tom English when trying to deal with the EIL. DiEb
Winston, Thanks for your reply. If you think that published responses would get the attention of Dembski and Marks, then hopefully someone will find the time to publish a response. DiEb, Excellent. Perhaps if Dembski and Marks decide to correct their Horizontal NFLT mistakes, they can just replace their proof with yours. R0bb
Firstly, my reasoning for believing that they misdefined active information:

1. The paper's initial sentence presents it as a response to/critique of "Conservation of Information in Search: Measuring the Cost of Success." That paper spends its time developing the notion of active information.
2. The paper lists several EIL papers as references. All of these papers deal with active information.
3. The paper uses a formula for a special case of active information which appeared in "Conservation of Information in Search."
4. The end of the paper implies that we try to judge model assumptions by "a numerical measure of how many logical possibilities that are ruled out or how far probability distributions deviate from uniform measures."

These factors led me to believe that the paper was attempting to use active information even if failing to call it that. What you are arguing is that the paper isn't trying to use active information, but rather a completely different information metric. I admit, I had not considered this possibility.

Secondly, my objection was that the example given is clearly not an instance of active information, because active information is the comparison of two searches. Since there are no searches being compared, it is not a valid example of active information. If the author did not intend to use active information, then the example is merely irrelevant to the question of active information.

Thirdly, the objection rests on this search-space-reduction model of information. But the EIL papers do not make use of this model of information. We've presented our model based on active information, and to qualify as a critique of our papers, it would need to look at our model, not some other model. This is not to say that a search-space-reduction model is invalid. It is basically Shannon's model of information. I am nowhere near arrogant enough to say that Shannon was wrong. But it is not the model of information that the EIL pursues in our published work. There is an important difference between the models. It's easy to decrease the entropy of a distribution. Thus we can easily gain information in a search-space-reduction model. However, you cannot so easily produce active information. That's why we believe that active information is an interesting quantity.

Fourthly, the paper describes our method as evaluating models based on the number of possibilities not chosen. But that's not what we do. We did not criticize Avida, Weasel, or Ev for having a large amount of active information. We criticized those simulations for gaining active information by exploiting knowledge of the problem being solved.

Fifthly, I'm glad we agree on something. This is a poor venue, and we look forward to seeing formal published articles critiquing our work. It is strange to us that we haven't seen formal published responses. There are various responses in the literature to Dembski's previous books such as No Free Lunch. Somehow our published articles have not provoked the same response. WinstonEwert
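[Ed.: Winston's distinction above — that decreasing the entropy of a distribution is easy — can be illustrated numerically. A minimal sketch; the distributions below are arbitrary illustrations, not drawn from any EIL paper:]

```python
import math

def shannon_entropy(dist):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Merely shifting probability mass toward one outcome lowers the entropy,
# i.e. "gains information" under a search-space-reduction/Shannon model:
uniform = [0.25, 0.25, 0.25, 0.25]
biased  = [0.85, 0.05, 0.05, 0.05]

print(shannon_entropy(uniform))  # 2.0 bits
print(shannon_entropy(biased))   # roughly 0.85 bits
```

Any reweighting toward a peak produces this kind of "information gain," which is why, on Winston's account, it is a weaker notion than active information.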
Winston, I'm still hoping you can point out the passages in the paper that give you the impression that the author thinks that size ratios are your entire method, unless you've dropped that objection.
Either it is intended to be active information, in which case they’ve gotten it completely wrong, or it’s not intended to be active information, in which case they are attacking us for a definition of information we did not use.
I don't see where he's attacking the EIL for a definition of information. He's adopting the EIL's practice of referring to a reduction of possibilities as information, which is consistent with Dembski's usage of the term for over a decade. As Dembski said in a recent interview: "Whenever I set the groundwork for information in a discussion of ID, I make clear that information happens when there is a reduction of possibilities." The reduction of search space possibilities is one example of this. Your new objection seems to be that the reduction of possible fitness functions is not a valid example. (And I think we all understand that elimination of possibilities is simply a special case of a more general concept of information, which entails bias, i.e. the shifting of probability mass to reduce entropy. Active information is bias toward a target.)

The author's point is that the EIL's model assumptions can be measured using the same principles that the EIL uses to measure other people's model assumptions. The same point could have been made using the term "size ratio" rather than "information".

With regard to venues for critiques of the EIL, you're right that UD comments aren't the best place for them. But there is no lack of blog posts and emails pointing out alleged problems, as well as a few semi-formal PDFs, all of which are mostly met with silence other than an occasional erratum. So I try to take advantage of the opportunity to talk to EIL members publicly when they show up in this forum. And it's worth noting that some alleged problems are too fundamental to be addressed with errata.

Ironically, both Marks and Dembski have commented on what they see as a lack of response from critics. They may be referring to formal published articles, in which case they're correct that the EIL critics definitely fall short. Writing such papers would require someone with enough time, talent, and interest, and while I meet the third requirement, I don't meet the first two. That's something that we need to work on, and I appeal to people like DiEb and Tom English to compensate for my shortcomings. R0bb
Now, the yardstick algor is a random walk search, and given that more effective search algors depend on being matched to the specific space in view, it is indeed reasonable for M & D to have concluded that on average, if there is no intelligent matching, a random pick of search algor from the space of possible algors makes the odds typically worse than just going for a random search. The wrong algor could actually lead you away from targets!
@KairosFocus: I assume that you mean the remark on the Horizontal No Free Lunch Theorem in the paper "The Search for a Search: Measuring the Information Cost of Higher Level Search". Needless to say, this theorem isn't seen as valid by many - you can see my quite recent and very elementary take here. BTW: there is a difference between a random walk search and the random search as described by Marks, Dembski et al. @WinstonEwert: I hope I can interest you in the paper-style pdf as linked above. I'd like to get your input - or the input of your colleagues. DiEb
R0bb, The focus of that paper is to show that the powerset of possible targets is much smaller than the set of possible fitness functions. It takes the formula and shows a huge amount of information in that restriction. But it's not active information, because active information compares the performance of search algorithms. A random search in either space (with targets or with a fitness function) has the same probability of success. Either it is intended to be active information, in which case they've gotten it completely wrong, or it's not intended to be active information, in which case they are attacking us for a definition of information we did not use.

Thanks for your listing of points. I'm aware of most of what you bring up there. But it's useful to see what you apparently consider significant points. I'm not going to actually engage them here because they deserve more thought than I'll give them if I do that. I've copied them into my file of critiques. But I'd suggest that you not attempt to post critiques in the comments of Uncommon Descent. As far as I know, nobody at the EIL makes a practice of reading through these comments, and your points will probably be completely missed. For typographical errors and minor mathematical mistakes, please just contact me by email directly and we'll post corrections giving you credit. For more major critiques, I'd suggest writing them up as blog posts or paper-style pdfs. Let me know where they are, and I'll read them. WinstonEwert
However, the sampler post doesn’t treat it as just one example, but makes it look like that’s our entire method.
I must be missing something, because I don't see anything like that in the article. The only relevant sentence that I see is this: "Adopting their terminology, the logarithm of the ratio |F|/|F_T| may be termed information and taken to measure how strong a priori assumptions Dembski and Marks rely on in their discussions of search problems." [Italics in original] Note that F is the set of all fitness functions V^Ω, not the search space, so I don't know if active information would be the appropriate term. Second, I don't see how calling a ratio of sizes "information" implies that only a ratio of sizes can be information, any more than calling a dog a mammal implies that only a dog can be a mammal. Maybe you can point out the passages in the paper that give you the impression that the author thinks that size ratios are your entire method.
I don’t like being wrong, and if I’m wrong I’d like to stop being wrong as quickly as possible.
That's an admirable quality, and I don't doubt that you have it. You realize, I assume, that several people have pointed out several alleged problems with the EIL papers and framework. In free-for-all venues like blogs, it's inevitable that some of the criticisms will be ill-founded, but I think it's worthwhile to read through them to find those that are valid. These alleged problems include mistakes in mathematical analyses, ranging from minor to egregious. They also include problems with the conceptual framework. For instance:

- Active information is a function of |Ω|, but how do we non-arbitrarily define Ω when modeling real-world processes? If our definition is based on empirical considerations, then that knowledge constitutes active information that reduces the search space from a larger space of logical possibilities. For example, when modeling the roll of a fair six-sided die where the target outcome is the number "6", we would tend to define the search space as {1, 2, 3, 4, 5, 6}, but that definition is based on our knowledge that those are the only outcomes with non-zero probability. If we try to eliminate this "familiarity zone" of prior knowledge by defining Ω as the set of all integers (remember that "The 'no prior knowledge' cited in Bernoulli's PrOIR is all or nothing: we have prior knowledge about the search or we don't"), then this process has infinite active information. So active information is an arbitrary measure, as it depends on our choice of what to count as prior information and what not to.

- Speaking of targets, the EIL has never told us how to distinguish an intrinsic target from a non-target.

- Contrary to Dembski and Marks' claim, the LCI is not a universal law. Finding counterexamples is trivially easy -- models as simple as one-dimensional random walks violate it. As Atom has pointed out, Dembski and Marks' CoI proofs work because they're restricted to cases in which a certain condition holds true, namely that the average of the lower-level distributions is a uniform distribution. (If we represent probabilistic hierarchies in terms of probability vectors and transition matrices, as we would other stochastic processes, this condition says that the transition matrix must be unbiased. That is, assuming the matrix is left-stochastic, all rows must have the same sum.)

- But if we impose the condition mentioned above, then the LCI amounts to nothing more than the probabilistic truism P(A&B) ≤ P(B). I can show this if you'd like.

This is just scratching the surface. Many challenges to the EIL have been issued in this forum. (See here for some that are recent, and pertain to your work with ev and Avida.) R0bb
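[Ed.: R0bb's die example can be made concrete. Under the Brillouin-style reduction formula discussed in this thread — information as log2(|Ω|/|Ω'|) for a reduction from an ambient space Ω to the retained space Ω' — the information credited to "knowing the die has six faces" depends entirely on how large we choose Ω to be. A minimal sketch; the ambient sizes below are arbitrary illustrations:]

```python
import math

# Brillouin-style information for reducing a search space from Omega to
# Omega'. Here Omega' is the six die faces {1,...,6}, while Omega is
# whatever larger space of "logical possibilities" we decide to start from.
def reduction_info(omega_size, reduced_size):
    return math.log2(omega_size / reduced_size)

# The measure grows without bound as the assumed ambient space grows,
# which is the arbitrariness R0bb points to:
for omega_size in (6, 60, 6000, 6 * 10**6):
    print(omega_size, reduction_info(omega_size, 6))
```

Taking Ω to be all integers, as in R0bb's reductio, sends the measure to infinity, since the ratio |Ω|/6 is unbounded.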
R0bb, As you point out, the Brillouin active information is the formula used in the mentioned post. But that's described in the paper as a particular example showing how active information applies. It is not intended as a general-purpose formula for calculating active information. Of course, you can use it, but you have to make sure the same reasoning applies. However, the sampler post doesn't treat it as just one example, but makes it look like that's our entire method. That's why I characterize it as failing to define active information correctly. They don't call it active information, but they take one particular example and present it as if our only concern was the size of search spaces.

DiEb, Most of our papers are focused on the idea of active information, and so it would be strange to attempt to dispute them without mentioning active information. Regardless, this particular paper appears to be trying to use active information without calling it that, uses the wrong formula, and criticizes us for things we'd never say as a result. As for the actual point you raise, I see your point. It is not immediately obvious how external information could fit into the model used in that paper. I'm afraid that all I can do on that at the moment is ask for your patience.

Anyway, I'm not seeking to hide from any critiques of our work. I don't like being wrong, and if I'm wrong I'd like to stop being wrong as quickly as possible. I don't generally engage in comment threads or forum postings as I don't find them productive venues. But if you have a thought-out written response or critique, feel free to send it to me at evoinfo AT winstonewert DOT com. I can't promise anything in terms of a response, but I will read and consider anything sent. WinstonEwert
R0bb - It's great to see you back on UD! FYI - I have worked on the problem a little further, and I think I have a methodology for calculating Active Information in biological systems generally, but it will probably be a bit until it is ready to show everyone. Also, unfortunately, I need an experimenter to actually perform the experiments and measure it. There is no pre-existing data set that I am aware of that I can go and use to perform the new calculations. johnnyb
Johnny Cash - Ain't No Grave [Official HD] - The Johnny Cash Project - song starts around 3:00 minute mark http://www.youtube.com/watch?v=WwNVlNt9iDk
I've always found it strange that Darwinists would use intelligently designed evolutionary algorithms to prove to us 'IDiots' that intelligent design isn't needed to generate functional information.
Signature In The Cell - Review Excerpt: There is absolutely nothing surprising about the results of these (evolutionary) algorithms. The computer is programmed from the outset to converge on the solution. The programmer designed it to do that. What would be surprising is if the program didn't converge on the solution. That would reflect badly on the skill of the programmer. Everything interesting in the output of the program came as a result of the programmer's skill - the information input. There are no mysterious outputs. Software Engineer - as quoted to Stephen Meyer http://www.scribd.com/full/29346507?access_key=key-1ysrgwzxhb18zn6dtju0 Refutation Of Evolutionary Algorithms https://docs.google.com/document/pub?id=1h33EC4yg29Ve59XYJN_nJoipZLKIgupT6lBtsaVQsUs
Wouldn't their case be far more convincing, to those of us who are not blindly committed to materialistic answers beforehand, if they were to use purely material processes to generate functional information in the first place?
The Capabilities of Chaos and Complexity - David L. Abel Excerpt: "To stem the growing swell of Intelligent Design intrusions, it is imperative that we provide stand-alone natural process evidence of non trivial self-organization at the edge of chaos. We must demonstrate on sound scientific grounds the formal capabilities of naturally-occurring physicodynamic complexity. Evolutionary algorithms, for example, must be stripped of all artificial selection and the purposeful steering of iterations toward desired products. The latter intrusions into natural process clearly violate sound evolution theory." http://www.mdpi.com/1422-0067/10/1/247/pdf
Moreover, in stark contrast to the neo-Darwinian presupposition that functional information 'emerges' from a purely material basis, it is found that the information we have in computers is actually a subset of 'non-local' quantum information:
Quantum knowledge cools computers: New understanding of entropy – June 2011 Excerpt: No heat, even a cooling effect; In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that “more than complete knowledge” from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy. Renner emphasizes, however, “This doesn’t mean that we can develop a perpetual motion machine.” The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what’s known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says “We’re working on the edge of the second law. If you go any further, you will break it.” http://www.sciencedaily.com/releases/2011/06/110601134300.htm
,,,And to dot the i’s, and cross the t’s, here is the empirical confirmation that quantum information is in fact ‘conserved’;,,,
Quantum no-hiding theorem experimentally confirmed for first time Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed. This concept stems from two fundamental theorems of quantum mechanics: the no-cloning theorem and the no-deleting theorem. A third and related theorem, called the no-hiding theorem, addresses information loss in the quantum world. According to the no-hiding theorem, if information is missing from one system (which may happen when the system interacts with the environment), then the information is simply residing somewhere else in the Universe; in other words, the missing information cannot be hidden in the correlations between a system and its environment. http://www.physorg.com/news/2011-03-quantum-no-hiding-theorem-experimentally.html
Further notes:
Falsification Of Neo-Darwinism by Quantum Entanglement/Information https://docs.google.com/document/d/1p8AQgqFqiRQwyaF8t1_CKTPQ9duN8FHU9-pV4oBDOVs/edit?hl=en_US Does Quantum Biology Support A Quantum Soul? – Stuart Hameroff - video (notes in description) http://vimeo.com/29895068
Music and Verse:
Johnny Cash - Ain't No Grave http://www.youtube.com/watch?v=o0MIFHLIzZY Job 19:26-27 And after my skin has been destroyed, yet in my flesh I will see God; I myself will see him with my own eyes--I, and not another. How my heart yearns within me!
PS: For those needing more on Weasel, cf here on. It is unfortunate that Weasel was allowed to mislead ever so many people over the years. kairosfocus
H'mm: Took a glance at the PT hit piece -- anything that resorts to "creationists in a cheap tuxedo" type namecalling is a hit piece. The first thing that jumps out is that it fails to acknowledge that the Weasel case is targeted search, and that that is crucial to how it outperforms blind search. Never mind Dawkins' admission that it is "misleading." That is a measure of the quality of thinking we are dealing with here. The more technical review referenced says at a crucial point:
Assumptions are a necessary ingredient in any mathematical model or calculation. Dembski and Marks are as guilty of making model assumptions as anyone else.
The issue is not whether a model has assumptions or other inputs, but what they are and how warranted they are. In particular, if you end up feeding in targeting info, it is appropriate to infer from how that allows you to outperform a strict random walk that there is a metric of info there, and that is what they define as active info. And probabilistic metrics of info are a longstanding and well-accepted practice, often with logs used to give additivity. Where the core idea behind active info can be summarised:
Conservation of information theorems indicate that any search algorithm performs on average as well as random search without replacement unless it takes advantage of problem-specific information about the search target or the search-space structure. Combinatorics shows that even a moderately sized search requires problem-specific information to be successful. Three measures to characterize the information required for successful search are (1) endogenous information, which measures the difficulty of finding a target using random search; (2) exogenous information, which measures the difficulty that remains in finding a target once a search takes advantage of problem-specific information; and (3) active information, which, as the difference between endogenous and exogenous information, measures the contribution of problem-specific information for successfully finding a target. This paper develops a methodology based on these information measures to gauge the effectiveness with which problem-specific information facilitates successful search. It then applies this methodology to various search tools widely used in evolutionary search.
The pivotal concept here is that: Combinatorics shows that even a moderately sized search requires problem-specific information to be successful. Basic to this is that a config space is exponentially explosive in the number of bits used to specify its members: using bits, the number of possibilities is 2^n for n bits, i.e. the space doubles for each additional bit. Consequently, the resources of our solar system are unable to reasonably carry out a successful random search of a config space of just 500 bits, for any reasonably isolated and separately definable target. Up this to 1,000 bits and we swamp the resources of the observed cosmos.

Per sampling theory, such a search can only credibly pick up what is typical, where the sort of islands of function in view in the context of analysis are just that: highly specific and isolated, thus atypical. As I have often noted, for 500 bits, a search using our solar system's 10^57 atoms running for its lifespan to date on a typical timeline could not check more than 1 in 10^48 of the space, comparable to drawing a one-straw-sized sample of a cubical haystack 3 1/2 light days across. Even if our solar system out to Pluto were in it, with all but certainty such a sample would, on sampling theory, pick what is typical: straw. To expect to find something other than straw on any reasonable gamut of resources, you would need an intelligent search.

In short, the basic issue is not that hard to see. So, if someone is unwilling to squarely face this issue, we can confidently conclude that what is going on is not a question of reasonable analysis, but an exercise in ideologically motivated obfuscation. Which is also what namecalling tactics point to.
Now, the yardstick algor is a random walk search, and given that more effective search algors depend on being matched to the specific space in view, it is indeed reasonable for M & D to have concluded that on average, if there is no intelligent matching, a random pick of search algor from the space of possible algors makes the odds typically worse than just going for a random search. The wrong algor could actually lead you away from targets! As a result it is inherently highly reasonable to deduce a metric of injected "active" info from the over-performance of an algor relative to random search in a complex enough space to swamp random search resources. The specific cases which are examined at Evo Informatics, simply underscore this fairly obvious point. So, when I see dismissal arguments and namecalling, that simply tells me that we are here up against hatchet jobs in service to ideological a prioris, not reasonable criticism. GEM of TKI kairosfocus
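[Ed.: The three measures in the abstract quoted above can be put into elementary form. A minimal sketch; the target size and search-success probabilities below are arbitrary illustrations, not taken from any EIL case study:]

```python
import math

# Per the quoted abstract: endogenous information measures the difficulty of
# finding a target by blind (random) search, exogenous information the
# difficulty remaining for an assisted search, and active information is
# their difference.
def endogenous_info(p):          # p = probability of blind-search success
    return -math.log2(p)

def exogenous_info(q):           # q = probability of assisted-search success
    return -math.log2(q)

def active_info(p, q):
    return endogenous_info(p) - exogenous_info(q)   # = log2(q / p)

# A single target in a 500-bit configuration space:
p = 1 / 2**500
print(endogenous_info(p))        # 500.0 bits

# An assisted search succeeding once in a thousand tries on the same space
# must, on this accounting, have been handed nearly all 500 bits:
q = 1e-3
print(active_info(p, q))         # about 490 bits
```

The point being argued in the thread is where those ~490 bits come from: matching of algorithm to search space, which the EIL attributes to problem-specific (and ultimately intelligent) input.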
WinstonEwert, Dembski and Marks use the term Brillouin active information to describe the reduction of a search space size from |Ω| to |Ω'|. So this type of active information is based on the size of search spaces, and it's the only type of active information that I can recall being applied to real-world scenarios, e.g. searching for treasure in Bora Bora, boiling an egg, and johnnyb's somatic hypermutation example. The Metropolis Sampler author applies the same principle to the space of objective functions. By assuming a binary success criterion, Dembski and Marks dramatically reduce the size of this space. The point is that Dembski and Marks make model assumptions like everyone else, and it shouldn't matter how much those assumptions shrink a space from a larger space of strictly logical possibilities. What should matter is whether the assumptions are appropriate for what is being modeled. Whether you agree with this point or not (and I think there are better ways to illustrate it), I see no failure "to use the correct definition for active information". The term "active information" isn't even used in the article.

News, I have no idea what dungeoneer you're referring to, but obviously it makes no sense to speak of debunking a lab. Maybe you could share this dungeoneer's email with us, with permission of course. And it's interesting that you put so much stock in peer review. Would you say that you're consistent in that regard? R0bb
News: "What technical paper debunked it? As opposed to, say, an asshat online who was mad about it years ago? Mark’s work is in a peer reviewed technical journal. Do the Darwinists expect us to give equal credence to an anonymous tendentious internet troll?"
When Deolalikar came up with his paper on P != NP, it was discussed (and debunked) on various blogs. Tom English and Joe Felsenstein responded to the papers of Bob Marks and William Dembski, and they are most certainly not tendentious internet trolls.
Winston Ewert: Actually, the astounding thing is that the alleged debunking has no relation to what was actually done in the papers.
It is possible to find problems within the papers without referring to the definition of active information. One of my points of critique is that you give many examples of directed search, oracles, etc., especially in "Conservation of Information in Search" and "Efficient Per Query Information Extraction from a Hamming Oracle" - but when it comes to laying down the theory in "The Search for a Search", a model for a search is presented which doesn't allow for the input of a fitness function, an oracle, or anything else: even an assisted search is described as a probability measure on the augmented search space, and is therefore independent of any further input as provided by a fitness function etc. To elaborate: choosing ω_n depends only on the previously chosen ω_1 ... ω_n-1, but not on the values of the fitness function f(ω_1), ... f(ω_n-1) (or the output of the oracle for these elements). IMO, a blog is the ideal place to clarify such a point; in fact, I'd be grateful if Winston Ewert would provide me with some insight... DiEb
It's my expectation and satisfaction that North America, which is the greater origin of the important advances in human knowledge (called science), is the most creationist-ish. It would only be this way if evolutionism etc. was wrong. The sharper chaps would most likely be least persuaded by error and more persuaded by criticisms of error. If we are the most intelligent people in history, then it would follow logically that we would be the least prone to error in subjects dealing with contentions based on evidence. A line of reasoning. Robert Byers
Actually, the astounding thing is that the alleged debunking has no relation to what was actually done in the papers. It defines something like active information but bases it not on the probability of success for the algorithm as we do, but on the size of the search spaces. Everything the entire paper says derives from having failed to even use the correct definition for active information. Sad... So sad. WinstonEwert
