Uncommon Descent Serving The Intelligent Design Community

“Conservation of Information Made Simple” at ENV


Evolution News & Views just posted a long article I wrote on conservation of information.

EXCERPT: “In this article, I’m going to follow the example of these books, laying out as simply and clearly as I can what conservation of information is and why it poses a challenge to conventional evolutionary thinking. I’ll break this concept down so that it seems natural and straightforward. Right now, it’s too easy for critics of intelligent design to say, ‘Oh, that conservation of information stuff is just mumbo-jumbo. It’s part of the ID agenda to make a gullible public think there’s some science backing ID when it’s really all smoke and mirrors.’ Conservation of information is not a difficult concept and once it is understood, it becomes clear that evolutionary processes cannot create the information required to power biological evolution.” MORE

TEASER: The article quotes some interesting email correspondence that I had with Richard Dawkins and with Simon Conway Morris, now going back about a decade, but still highly relevant.

Comments
R0bb, The misunderstandings are all yours. Unsearchable spaces are not included in Ω, period, end of story. That you refuse to grasp that simple fact tells me that you have other issues. As for the directions in the machine/ dice example- Dembski says:
But then a troubling question crosses your mind: Where did this machine that raises your probability of success come from?
That means you need to find the machine/ machines, R0bb. Joe
Joe:
So Ω, in your example, would be whatever two spaces are amenable to be searched from where the start is- which is something you have never defined.
Like all search-for-a-search scenarios, the random walk example is a chain of two random variables. The state at time t is never defined because it's a random variable. You won't tell me whether you understand what the term random variable means, so I don't know if you understand what I just said. But I'm repeating what I said before, and your question indicates that you didn't understand it the first time, so I doubt that you understand it now.
We were NOT talking about that, were we? Talk about being dishonest, geez.
No, we were talking about a different example that, like the dice/machines example, doesn't involve "directions". Since you added these imaginary directions to the example in the S4S paper, I figured you would do the same for the dice/machines example. And you're nothing if not consistent:
But anyway the directions to ignore would be where to find the machines that are up to the task- I went over that already too.
Again, there are no such directions in that example. You made them up. I can guess at the source of your misunderstanding of Dembski's framework. I can't say for sure, since you won't answer my questions, but I'll share my guess with you when I get time to write another comment. R0bb
Unsearchable spaces are not included in Ω, period, end of story. So Ω, in your example, would be whatever two spaces are amenable to be searched from where the start is- which is something you have never defined.
Also, I recommend that you read the article that the OP is about. And ask yourself how you reconcile the dice/machine example with your interpretation of “active information” as “directions” which the searcher is free to ignore.
We were NOT talking about that, were we? Talk about being dishonest, geez. But anyway the directions to ignore would be where to find the machines that are up to the task- I went over that already too. Joe
Joe, Okay, so you're back to your assertion that |Ω| = 2 instead of 5. Apparently the fact that the LCI holds if |Ω| = 2 is indicating to you that you're correct. Of course, the LCI also holds if |Ω| = .001, but that would be nonsense. I define Ω as {"n-2", "n-1", "n", "n+1", "n+2"}, using the labels from the random walk model definition. What do you think Ω should be? That is, can you fill in the blanks: Ω = {___, ___} Remember that Ω is part of the model definition. It doesn't change according to which low-level search is realized in the high-level search. Also, I recommend that you read the article that the OP is about. And ask yourself how you reconcile the dice/machine example with your interpretation of "active information" as "directions" which the searcher is free to ignore. R0bb
R0bb, Any space that does not have the target has a zero probability outcome. The equations YOU used- Dembski and Marks' equations. Unsearchable spaces are not included in Ω, period, end of story. Deal with that. Joe
R0bb:
Can zero-probability outcomes occur?
Joe:
Already answered that question, R0bb.
Sorry, I must have missed it. Which comment is your answer in?
Also the fact that when I plug in the values I say are correct, the equations actually work, tells me I am right.
Excellent! Let's see these equations. R0bb
Already answered that question, R0bb. Also the fact that when I plug in the values I say are correct, the equations actually work, tells me I am right. Unsearchable spaces are not included in Ω, period, end of story. Joe
Well then you don't understand a damn thing.
Then why is it that I'm addressing with specificity each of the issues that you bring up, while you avoid almost all of my questions like they have cooties? It's astounding that you're unwilling to go out on a limb and answer as simple a question as "Can zero-probability outcomes occur?" It's not a trick question. R0bb
R0bb:
And yet I just pointed out specifically where they do.
Only in your mind. Umm the directions are the active information which tells you where NOT to look.
No.
Well then you don't understand a damn thing. Joe
Joe:
No, R0bb, Dembski and Marks do NOT do what you say.
And yet I just pointed out specifically where they do. And I showed that you do too. Are you going to address this, or simply keep repeating your claim?
Umm the directions are the active information which tells you where NOT to look.
No. There is no "you" doing the "searching" in that example. There is only "we" analyzing the math, but we're not doing the "searching". Read the example and you'll see that this is the case. Even though I've pointed it out repeatedly, you still haven't grokked the fact that in Dembski's framework, a "search" is a random variable. Active information is bias in this random variable. You're still acting as if Dembski is using the terms "search" and "information" in their conventional senses. Can zero-probability outcomes occur? That's a ridiculously simple yes/no question. R0bb
No, R0bb, Dembski and Marks do NOT do what you say. As I told you already, YES, that space is open to a search by an imbecile who cannot follow directions.
There are no directions in that example
Umm the directions are the active information which tells you where NOT to look. But anyway wallow in your strawman. Joe
So you’re saying that Dembski and Marks are wrong when they define Ω such that it includes outcomes that are inaccessible to the alternate search?
They do not do that.
Of course they do, in examples and in theorems. In each of their three CoI theorems, there are alternate searches for which all outcomes in Ω, except for one, have a probability of zero. They even have a name, "Brillouin active information", for the increase in performance resulting from a subset of Ω being inaccessible to the alternate search. And you do it too. In the random walk example, there are three alternate searches, and every outcome in Ω has a probability of zero in at least one of those alternate searches. So why do you say that |Ω| is 2, when your own claims dictate that it should be 0?
As I told you already, YES, that space is open to a search by an imbecile who cannot follow directions.
There are no directions in that example, nor is there a person who could do something imbecilic. There are only well-defined random variables. Θ_2 confers a probability of zero on square 1.1. Zero-probability outcomes cannot occur. Please, please, tell me that you understand this fact. R0bb
So you’re saying that Dembski and Marks are wrong when they define Ω such that it includes outcomes that are inaccessible to the alternate search?
They do not do that.
Is square 1.1 accessible to the alternate search in the S4S example that we discussed at length?
As I told you already, YES, that space is open to a search by an imbecile who cannot follow directions. And Robb, you don't have to tell me what you don't understand, it is obvious. Joe
If it cannot be searched it is not included, period.
So you're saying that Dembski and Marks are wrong when they define Ω such that it includes outcomes that are inaccessible to the alternate search? Is square 1.1 accessible to the alternate search in the S4S example that we discussed at length? That's a yes/no question -- you can answer it in 2 or 3 keystrokes.
Every outcome is either part of the target or it isn’t.
Every outcome is searchable/ accessible at the same time or it is not part of Ω and therefore not part of the equation.
What does that have to do with the probability of an outcome being part of the target? What equation are you talking about? Because you refuse to tell me what you've read and what you already understand (or don't understand), I have no way of knowing where to start when I talk to you. If you haven't read the article referenced in the OP or other Evo Info Lab papers, we have to start with the basic concepts. But if you've read them, then we have a foundation of shared terminology and concepts. It seems that we don't have that foundation, which is why I keep asking you what you've read and what you do or don't understand. Your refusal to answer dooms this conversation, which means that you've launched a series of criticisms, accusations, and insults at me and then made it impossible to discuss them. Why are you doing this? R0bb
R0bb:
You said that Ω should be 2 instead of 5.
Ω is whatever is searchable for the target you are looking for.
2 is the number of states accessible to the alternate search.
Doesn't matter. If it cannot be searched it is not included, period. And that is why we do NOT include the numbers above 6 when considering the probabilities of a roll of a die. Also, as I have already told you, in your example the target can be had in the first search, and that is not so with Dembski and Marks' examples. They are talking about one search that will make the second search easier.
Who said anything about the “probability of containing the target”?
I did.
Every outcome is either part of the target or it isn’t.
Every outcome is searchable/ accessible at the same time or it is not part of Ω and therefore not part of the equation. Joe
Joe:
Ω is the searchable space, meaning every position within Ω can be searched.
You said that Ω should be 2 instead of 5. 2 is the number of states accessible to the alternate search. But Ω can contain outcomes that are inaccessible to the alternate search. I've pointed out several examples of this. We could go through them again if you'd like.
Ω does contain areas that have a zero probability of containing the target
Who said anything about the "probability of containing the target"? How does that even make sense? Every outcome is either part of the target or it isn't. R0bb
R0bb, Ω is the searchable space, meaning every position within Ω can be searched. The searchable area is Ω- and only the searchable area. If it ain't in the searchable area it is not in Ω. And if you cannot find a space that means it is not in the searchable area and is not included in Ω. Ω does contain areas that have a zero probability of containing the target, yet still have some probability, ie NOT zero, of being searched. I told you that in the beginning and it still stands. IOW you don't have any challenge for me- well maybe the challenge is just reading your tripe. And your tripe is an insult to IDists, especially Dembski and Marks. Joe
Joe, it's you, not Dembski, that I'm challenging right now. You said I "smooched the pooch" for including inaccessible outcomes in Ω. Then you reversed your position, saying that Ω can contain inaccessible outcomes. Then you apparently reversed it back again, referring me back to comments 24-27. Throughout this, you never acknowledged changing your position, nor did you ever retract the "smooched the pooch" insult. You have called me a liar, told me that I seem to be good at humping strawmen, and favored me with a whole host of other insults and accusations. But when it comes to defending your criticisms, you avoid answering even the simplest of questions. Are you going to defend your criticisms, accusations, and insults, or not? R0bb
@JWTruthInLove, Any onlookers who think that don't know anything anyway. So why should I care about know-nothings? I told R0bb what was wrong with his examples and he can't take it. I am done- if Dembski wants to chime in to protect his work- if he thinks it is threatened- then let him do it. Why am I in charge of protecting Dembski? Joe
@Joe: To onlookers it looks like you have some issues you have to resolve... Like developing the ability to answer questions. JWTruthInLove
R0bb, Obviously you have issues that you need to resolve. I will leave you to take care of that and will join Dembski in giving you the "response" you deserve. Joe
Your random walk search in no way exemplifies anything Dembski and Marks were talking about.
Which of the following statements do you disagree with? 1) In Dembski and Marks' LCI framework, a "search" is nothing more than a random variable. 2) The LCI says that the active information of a lower-level search is never higher than the endogenous information of the higher-level search. 3) A chain of two searches is nothing more than a chain of two random variables. 4) So the LCI applies to chains of two random variables. 5) The random walk example is a chain of two random variables. 6) So the LCI applies to the random walk example.
Any space without the target = an impossible outcome, duh.
It looks like we need to step way back. Do you know what the probability theoretic term outcome means? Do you understand the concepts of a random variable and a sample space? And while we're at it, which of Marks and Dembski’s papers have you read? Did you read the article referenced in the OP? Why do you avoid most of my questions, even when they're yes/no? Do you not understand them, not know the answer, not like the answer, not want to take the time to type "yes" or "no", or is it some other reason? R0bb
As for understanding, well it is clear that you do not understand what Dembski and Marks wrote. And there is no way any thread can be productive with you and your strawman as the focal point. So I will leave you to wallow in your strawman.
I didn’t say anything about “spaces that do NOT contain the target.” I said “impossible outcomes”, meaning zero-probability outcomes, which has nothing to do with whether the target contains them or not.
Any space without the target = an impossible outcome, duh. Joe
R0bb, Your random walk search in no way exemplifies anything Dembski and Marks were talking about. Yours is a strawman, period, end of story- and you deserve to be ignored. Joe
Joe:
In Dembski’s omega EVERY space can be had with one move. Not so with yours.
By definition, every "space" in Ω "can be had with one move" in the baseline search. When you talk about more than one "move", you're thinking of the higher-level search followed by the alternate search. Neither of those is the baseline search. The baseline search is defined as a flat distribution over Ω. That's Dembski and Marks' definition, which I've been using consistently. I've never said or implied differently. You called me a liar over this issue in #170. Please show me that I'm lying when I say that every "space" in Ω "can be had with one move" in the baseline search. Where have I ever said any different? Quote me, please.
You later changed your position, saying that Ω could contain impossible outcomes.
Yes omega contains spaces that do NOT contain the target.
I didn't say anything about "spaces that do NOT contain the target." I said "impossible outcomes", meaning zero-probability outcomes, which has nothing to do with whether the target contains them or not. I'll ask again: - Is there anything I’ve said in this discussion that you don’t understand? - Which of Marks and Dembski's papers have you read? Did you read the article referenced in the OP? These are a few of the literally dozens of questions that you have ignored, causing this discussion to go in circles. There's no way that this thread can be productive if you aren't willing to say what you do and do not understand. R0bb
R0bb:
What exactly did they write that shows that my Ω is wrong?
The same thing I have been telling you. Nothing has changed. In Dembski's omega EVERY space can be had with one move. Not so with yours.
You later changed your position, saying that Ω could contain impossible outcomes.
Yes omega contains spaces that do NOT contain the target. However each and every space is available to be searched. Again, not so with your example. And no, Dembski won't chime in to correct your nonsense. Joe
Joe:
Your omega is wrong- that is according to what Dembski and Marks wrote.
What exactly did they write that shows that my Ω is wrong? Can you please provide a quote? You started this conversation by disputing my definition of Ω, saying that it should be 2 instead of 5 since there are only 2 possibilities. You later changed your position, saying that Ω could contain impossible outcomes. This is after telling me that I "smooched the pooch" because I included impossible outcomes in Ω. At that point you said that Ω isn't even relevant. You even said, "That you keep going back to omega tells me you have deceptive intentions." So after disputing my definition of Ω, you criticized me for staying on the subject of the dispute that you brought up. Now you're disputing Ω again.
Do you call it a “search” when you roll a die a single time?
Only if it is part of a search. If I am just rolling a die then it isn’t a search.
This may explain why we can't agree. In Dembski's LCI model, a search is a random variable. That means that something as simple as a roll of a die is a search. Perhaps you haven't read Dembski's work (including the article that the OP is about), and you think that he uses the term search in its conventional sense. You have to actually read his papers to see what he means. So I'll ask you point blank: Which of his papers have you read? Did you read the article referenced in the OP? This argument is unnecessary in the sense that the point I made in the random walk example has already been acknowledged by a member of the Evo Info Lab, namely Atom. He sees the LCI as having a tacit condition, even though Dembski and Marks have never said so, and have consistently portrayed the LCI as a universal law. The only question is whether Dembski will acknowledge that the LCI doesn't always work. It's not a controversial issue -- it's a trivial, mathematical fact, as already acknowledged by Atom. So I appreciate you keeping this thread alive, on the very slim chance that Dembski or another member of the Evo Info Lab may see fit to chime in. We have to all get on the same page with regards to basic mathematical facts before we can talk about applying the LCI to ID.
Good luck with that- ya see there isn’t anything else to say about your examples until they appear in a peer-reviewed journal.
So you accept Dembski's LCI even though it's not peer-reviewed, but my counterexample to his LCI needs to be peer-reviewed? R0bb
BTW R0bb, I never assumed that you understood the topic as it has been clear to me that you do not. But again you can try to have your stuff published and prove me wrong. Joe
R0bb, Your omega is wrong- that is according to what Dembski and Marks wrote. The target that is in omega can be had in ONE move. Not so with your example. As for a search for a search, again in your example we can find the target in the first move, meaning there isn't any search for a search. In DM's example they were searching for two different things. The first search was to help them in the second search. Not so with yours.
Do you call it a “search” when you roll a die a single time?
Only if it is part of a search. If I am just rolling a die then it isn't a search.
When you get a fortuitous roll in a board game, do you say, “Wow, that was a great search”?
No, I say "Yes!" But anyway I explained why your example is incorrect. However if you choose you can try to have it published. Good luck with that- ya see there isn't anything else to say about your examples until they appear in a peer-reviewed journal. But I doubt the ever will make it. Joe
Joe, with you citing comments 24-27, we've come full circle, and we've had little, if any, progress. It doesn't have to be this way. The LCI is a mathematical claim, and I'm making the mathematical counterclaim that the LCI doesn't always hold. Unlike some other issues in the ID debate, this one is straightforward and well-defined, and should not be controversial at all. We really can resolve our disagreements, but it will take some work, and will require more transparency in our communication. Specifically, I've assumed throughout that you have a good understanding of the topic we're discussing, and that if you don't understand something I say, you'll tell me. Have I assumed correctly? Is there anything I've said that you don't understand? I'll continue when you answer that question. R0bb
Comments 24-27
That's back in the beginning when you were arguing that |Ω| should be 2 instead of 5, since there are only 2 possible outcomes for a given initial state. Then you reversed your position, saying that "Ω can contain zero probability sections". Are you back to your original position again?
and 164
Do you understand that everything in Ω is equiprobably accessible by a single realization of the baseline search, by definition?
Then there is the fact that in your example the target can be had on the initial drop-in, which makes the next level search moot. IOW it isn’t a search for a search at all.
If you're using the term "search" in the conventional way, then most of Dembski's examples aren't searches. Do you call it a "search" when you roll a die a single time? When you get a fortuitous roll in a board game, do you say, "Wow, that was a great search"? In Dembski's framework, a "search" is a random variable. If you can find anywhere that he restricts the definition further, please point it out to me. A "search for a search", then, is a chain of two random variables. Again, if you can find a more restrictive definition in Dembski's work, please show me. My example is a chain of two random variables, and therefore a "search for a search" according to Dembski's usage. If it bothers you that the higher-level search space and the lower-level search space contain some of the same states, then we can easily relabel the states so they're not the same, and the analysis won't be affected at all. It won't be a two-dimensional random walk any more, but it will still be a chain of two random variables and a counterexample to the LCI. R0bb
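To make the structure concrete for onlookers, here is a minimal Python sketch of a chain of two random variables of the kind described above. It assumes the labelling used elsewhere in this thread (a higher-level search that returns one of the three initial states n-1, n, n+1 with equal probability, followed by a single ±1 random-walk step); it is an illustration only, not code from Dembski, Marks, or the Evo Info Lab.

import random
from collections import Counter

OMEGA = [-2, -1, 0, 1, 2]        # offsets from "n", matching the labels {"n-2", ..., "n+2"} used in this thread
INITIAL_STATES = [-1, 0, 1]      # the three states the higher-level search can return (assumed equiprobable)

def two_tier_search():
    start = random.choice(INITIAL_STATES)   # higher-level search: one random variable
    step = random.choice([-1, 1])           # lower-level search: a single random-walk step
    return start + step

trials = 100_000
counts = Counter(two_tier_search() for _ in range(trials))
for state in OMEGA:
    label = "n" if state == 0 else f"n{state:+d}"
    print(label, counts[state] / trials)
# Roughly 1/6, 1/6, 1/3, 1/6, 1/6 over the five states: biased toward the centre state,
# compared with the uniform baseline of 1/5 over Omega.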
Comments 24-27 and 164, for starters. But again you have already read and choked on those, so there is no way you will change your position just by rereading them. Then there is the fact that in your example the target can be had on the initial drop-in, which makes the next level search moot. IOW it isn't a search for a search at all. Joe
Joe, can you at least tell me the number of the comment in which you told me what the misrepresentation is? I honestly don't know what you're referring to, and if I've misrepresented Dembski in any way, I want to correct it. R0bb
R0bb, I have already answered that question/ told you what the misrepresentation is. Just because you didn't like the answer or cannot follow along doesn't mean it wasn't answered. Joe
Your random walk is a strawman because it misrepresents Dembski’s position.
You know, of course, that I'm going to ask the obvious question: How does the random walk example misrepresent Dembski's position? It would help if you would tell us what the alleged misrepresentation is. For example, "Dembski says ____________ but the random walk example seems to assume that Dembski says _____________." R0bb
As for refuting Darwinian evolution, well there still isn't anything to refute because "that which can be asserted without evidence can be dismissed without evidence" (hitchens). Joe
R0bb, Your random walk is a strawman because it misrepresents Dembski's position. Joe
But you can take your strawman and hump it, you seem to be good at that.
Thank you for the kind words, Joe. I love you too. A strawman is a misrepresentation of an opponent's position. Can you quote something from me that misrepresents Dembski's position? WRT your quote from Dembski in #177, that's not the same LCI that he's talking about in the OP. His old LCI says that a "necessity+chance" process cannot increase the net CSI by more than 500 bits. His new LCI says that the active information in a lower-level search cannot exceed the endogenous information in the higher-level search. R0bb
Thanks, onlooker. You're right about the NFL theorems, which are as problematic as the LCI for ID proponents who try to apply them to nature. Nature, as we observe it, comes nowhere near satisfying the NFL or LCI conditions. In order to construct an ID argument from NFL or the LCI, we have to assume that nature itself arose from a "search" that does satisfy the NFL or LCI conditions. That's where Dembski's appeal to the Principle of Insufficient Reason comes in. Dembski realizes that the LCI can't refute hypotheses regarding more immediate causes, like Darwinian evolution. He says:
Nature is a matrix for expressing already existent information. But the ultimate source of that information resides in an intelligence not reducible to nature. The Law of Conservation of Information, which we explain and justify in this paper, demonstrates that this is the case. Though not denying Darwinian evolution or even limiting its role as an immediate efficient cause in the history of life, this law shows that Darwinian evolution is deeply teleological.
R0bb
And MathGrrl exposes its ignorance:
There we have CSI (and intelligent design creationism) in a nutshell — we didn’t see it happening, therefore Jesus.
1- As opposed to Patrick's position which sez "we didn't see it happening therefore nature didit/ it just happened." 2- intelligent design creationism exists only in the closed minds of the wilfully ignorant. And here is Patrick. 3- Science Patrick- Ya see cause and effect relationships, in accordance with uniformitarianism, tell us that agency and only agency can account for the presence of CSI- SCIENCE, Patrick. As opposed to your position which can only say "anything but ID no matter what!" 4- Nothing about Jesus in anything I posted. IOW Patrick requires falsehoods in order to score brown-nose points with his anti-science buddies. Joe
Natural causes are therefore incapable of generating CSI. This broad conclusion I call the Law of Conservation of Information, or LCI for short. LCI has profound implications for science. Among its corollaries are the following: (1) The CSI in a closed system of natural causes remains constant or decreases. (2) CSI cannot be generated spontaneously, originate endogenously, or organize itself (as these terms are used in origins-of-life research). (3) The CSI in a closed system of natural causes either has been in the system eternally or was at some point added exogenously (implying that the system though now closed was not always closed). (4) In particular, any closed system of natural causes that is also of finite duration received whatever CSI it contains before it became a closed system.- wm dembski
So this tells us that when we observe CSI in the real-world and did not observe it arising, we can safely infer some agency was present to make it so. Joe
onlooker:
The most significant in terms of real world applicability is that the NFL theorems apply across all possible fitness landscapes, whereas evolution only has to work in the one we know about (aka reality).
What "evolution" atre you talking about? Blind watchmaker evolution, ie the modern synthesis, doesn't seem to work at all. IOW your equivocation is duly noted as is the total lack of evidnce for your position. Joe
R0bb, Your random walk example is a strawman. Period, end of story. So no, I don't have anything else to discuss with you on this topic. But you can take your strawman and hump it, you seem to be good at that. Joe
R0bb,
scordova, Chance Ratliff, Dieb, onlooker, and anyone else who might read this comment: Do any of you agree that the following is how the LCI applies to the real world?
As for how the LCI applies to the real world- It tells us that having directions or a recipe is an easier way to have a successful search than to just do stuff until you get what you want. If you have directions they narrow your search grid, ie they provide active information. The same with a recipe.
I'm responding to this just to let you know that at least one onlooker is still following this discussion. I'm enjoying your explanations. As far as your question goes, no, I don't think that what you quoted is how LCI applies to the real world because I don't think the LCI applies to the real world at all. The "Law of Conservation of Information" is not a law and describes nothing that is conserved. Mark Chu-Carroll has reviewed Dembski's paper and points out a number of serious flaws. The most significant in terms of real world applicability is that the NFL theorems apply across all possible fitness landscapes, whereas evolution only has to work in the one we know about (aka reality). Dembski might be able to construct something like the anthropological argument (privileged universe) from the NFL theorems, but he certainly can't use them to claim that evolution can't happen. onlooker
Joe, if we can't agree on something as fundamental as the fact that zero-probability outcomes cannot be realized, then I'm afraid we're at an impasse. And I'm not trying to exemplify any of the Dembski/Marks examples. I'm applying the LCI to a different example, namely a two-dimensional random walk, to show that the LCI doesn't always hold. With regards to the random walk model, the equiprobability of the three initial states is an assumption that we're required to make in order to apply the LCI to the model. It's part of the LCI analysis, not necessarily the model itself. But if the model as described in the TSZ post isn't entirely clear, we can describe it in terms of a transition matrix instead. (I'll use a right-stochastic matrix for convenience.) According to the LCI's required assumption, the initial probability vector is:
1/3  1/3  1/3
According to the definition of a two-dimensional random walk, the transition matrix is:
1/2   0   1/2   0    0
 0   1/2   0   1/2   0
 0    0   1/2   0   1/2
The resulting probability vector is their product:
1/6  1/6  1/3  1/6  1/6
This is the beginning of a binomial distribution, which is exactly what we expect for a two-dimensional random walk. A binomial distribution is biased toward the center position, which is why the random walk violates the LCI. When Marks and Dembski apply the LCI to examples, the examples are always chosen such that the uniformity of the initial probability vector is preserved in the resulting probability vector. This means that all columns in the transition matrix must have the same sum. Their definition of the LCI does not specify such a requirement, and not all non-contrived models have such a transition matrix, the random walk being a case in point. And as I've already pointed out, Marks and Dembski's example in the S4S paper, to which they do not attempt to apply the LCI, does not have such a transition matrix, and in fact violates the LCI. R0bb
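The arithmetic above can be checked directly. A minimal Python sketch, assuming the rows of the transition matrix correspond to the initial states n-1, n, n+1 and the columns to n-2 through n+2 (the comment does not label them explicitly):

import numpy as np

initial = np.array([1/3, 1/3, 1/3])        # equiprobable initial states, as the LCI analysis requires
transition = np.array([
    [1/2, 0,   1/2, 0,   0  ],             # from n-1: step to n-2 or n
    [0,   1/2, 0,   1/2, 0  ],             # from n:   step to n-1 or n+1
    [0,   0,   1/2, 0,   1/2],             # from n+1: step to n or n+2
])
result = initial @ transition              # probability vector over {n-2, n-1, n, n+1, n+2}
print(result)                              # [0.1667 0.1667 0.3333 0.1667 0.1667]

# If the target is taken to be the centre state "n", the bias toward the centre amounts to
# log2((1/3) / (1/5)) ≈ 0.74 bits of active information relative to a uniform baseline over Omega.
print(np.log2(result[2] * 5))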
Wrong again as Dembski's S4S search never says any squares are inaccessible. As I said before you are still free to choose a known loser square. But that would defeat the purpose of active information, but you could still do it.
Actually, according to the mathematical model of the process, you cannot.
No R0bb, they just don't expect anyone to, but they did not account for evolutionists. Bottom line R0bb, your random walk example does not exemplify any of the Dembski/ Marks examples.
So in my random walk example, 3 states are accessible via 1 move, and all 5 states are accessible via 2 moves.
Just because you are convinced that does not make it so. Ya see R0bb, you would have to rewrite your example- a complete rewrite. You would need to add a "drop-in" stage- a stage in which any of the initial 3 states can be reached. THEN you would have to change that to a shift to the right or left. But after that drop-in stage only two possible choices remain, not 5. Joe
Joe:
Baseline search: – Dembski’s dice example: All outcomes in Ω accessible in a single “move”. – S4S “squares” example: All outcomes in Ω accessible in a single “move”. – My random walk example: All outcomes in Ω accessible in a single “move”.
That is false as in your example not all outcomes in Ω are accessible in a single move. In your example the starting point determines which two spaces are accessible in a single move.
The "starting point" in the random walk, i.e. the initial state, is an alternate search, not the baseline search. For a given alternate search, only two outcomes are accessible. But the baseline search is defined such that all outcomes in Ω are equally probable. To say that not all outcomes in Ω are accessible to the baseline search is to deny Dembski and Marks' definition of "baseline search". If you don't approve of their definition, you're free to take it up with them.
Wrong again as Dembski's S4S search never says any squares are inaccessible. As I said before you are still free to choose a known loser square. But that would defeat the purpose of active information, but you could still do it.
Actually, according to the mathematical model of the process, you cannot. If a scenario in which a person cannot choose one of the 3 zero-probability squares seems unrealistic to you, then the mathematical model simply doesn't apply to free agents. The S4S example is a stochastic process, and the active information is a random variable. If the stochastic process in question is a person, we have to be careful not to conflate epistemic information with active information. Epistemic information can be ignored or poorly utilized -- we can choose to dig for treasure in a place other than where the map tells us. Active information, on the other hand, dictates the probability of each respective outcome being "chosen". If it confers zero probability on a square, then it is impossible for that square to be "chosen". To say otherwise is to say, absurdly, that the zero-probability square doesn't have zero probability.
Wrong again. Ω gets switched to the machine and then to the six.
I'm using Ω in the same sense that Dembski and Marks use the symbol -- it's the sample space of the lower-level search. Are you using it in a different sense? Dembski and Marks use a different symbol for the sample space of the higher-level search. For example, in the S4S paper, they call it M(Ω).
My random walk example: 3 outcomes are accessible in a single “move”, since the high level search returns 1 of 3 states ∈ Ω.
That all depends on where you start, R0bb. And you never specified that. Ya see if you start on one of the 3 initial states then only 2 are accessible with one move and after that only two others are accessible.
All outcomes are accessible via 2 “moves”.
Nope. Only two other outcomes are accessible.
In trying to clarify what you mean by "move", I said before: "By 'move', I assume you’re referring to a realization of a 'search' (which in Dembski’s framework is really just a random variable)." Since you didn't respond, this is still my assumption. The first "move" is the realization of one of the 3 starting states. According to Dembski and Marks' framework, the probability distribution over these 3 states is uniform. The second "move" is the realization of the one of the two states that are accessible to the initial state. So in my random walk example, 3 states are accessible via 1 move, and all 5 states are accessible via 2 moves. R0bb
R0bb:
Baseline search: - Dembski’s dice example: All outcomes in Ω accessible in a single “move”. - S4S “squares” example: All outcomes in Ω accessible in a single “move”. - My random walk example: All outcomes in Ω accessible in a single “move”.
That is false as in your example not all outcomes in Ω are accessible in a single move. In your example the starting point determines which two spaces are accessible in a single move. IOW, R0bb, you are a liar.
Alternate search: - Dembski’s dice example: All outcomes in Ω are accessible in a single “move”. - S4S “squares” example: Some outcomes in Ω are inaccessible in a single “move”. - My random walk example: Some outcomes in Ω are inaccessible in a single “move”.
Wrong again as Dembski's S4S search never says any squares are inaccessible. As I said before you are still free to choose a known loser square. But that would defeat the purpose of active information, but you could still do it.
- Dembski’s dice example: No outcomes in Ω are accessible in a single “move”, since the high-level search returns a machine, not a number. All outcomes are accessible via 2 “moves”.
Wrong again. Ω gets switched to the machine and then to the six.
- My random walk example: 3 outcomes are accessible in a single “move”, since the high level search returns 1 of 3 states ∈ Ω.
That all depends on where you start, R0bb. And you never specified that. Ya see if you start on one of the 3 initial states then only 2 are accessible with one move and after that only two others are accessible.
All outcomes are accessible via 2 “moves”.
Nope. Only two other outcomes are accessible. Joe
Joe:
The comment you chose to ignore for obvious reasons.
I'll naively assume that when you say "for obvious reasons", you're referring to the timestamp on my comment #165, which shows that I submitted it late at night (in the US), as is the case with the comment you're reading. Sometimes I go to sleep rather than immediately respond to further comments, lazy person that I am. Believe me, I know how important it is to be responsive. You, of course, know this also, which is why I know that when you get time, you'll go back and answer the dozens of questions that you have skipped over. When you do, a lot of things will become much more clear. I'll find out what you mean when you say "the equation", whether you realize that you have changed your position on your original objection to my random walk example, where exactly I mangled Dembski's words regarding individuation of outcomes, whether the LCI is a true law if it fails in mathematically valid cases, how Dembski's three CoI theorems apply to the real world, whether you've read Dembski's CoI proofs, whether you're interested in having a civil discussion, etc. R0bb
Joe:
Ω has to be defined as something you can get to with one move. For a fair six-sided die Ω is 1-6. The example we have been discussing has Ω = 16, each with an equal probability of being searched with one move. In R0bb’s random walk example you have 2 possible positions to be in after one move, yet he sets his Ω at 5, which messes up the equation (GIGO) and sez “I have disproven the LCI”.
Again, what "equation" are you talking about? By "move", I assume you're referring to a realization of a "search" (which in Dembski's framework is really just a random variable). But are you talking about the baseline search, the alternate search, or the two-tier combination of the S4S + alternate search? If you're talking about the baseline search, remember that the definition of the baseline search is based on the Principle of Indifference. The baseline search is defined as an equiprobable distribution over Ω. Therefore, every outcome in Ω is accessible by a single realization of the baseline search. This is always true, by definition. Here's a comparison of what is accessible in 1 or 2 "moves" in the three examples we've discussed: Baseline search: - Dembski's dice example: All outcomes in Ω accessible in a single "move". - S4S "squares" example: All outcomes in Ω accessible in a single "move". - My random walk example: All outcomes in Ω accessible in a single "move". Alternate search: - Dembski's dice example: All outcomes in Ω are accessible in a single "move". - S4S "squares" example: Some outcomes in Ω are inaccessible in a single "move". - My random walk example: Some outcomes in Ω are inaccessible in a single "move". Two-tier search: - Dembski's dice example: No outcomes in Ω are accessible in a single "move", since the high-level search returns a machine, not a number. All outcomes are accessible via 2 "moves". - S4S "squares" example: No outcomes in Ω are accessible in a single "move", since the high-level search returns a distribution over a subset of Ω, rather than an individual square. All but 3 outcomes are accessible via 2 "moves". - My random walk example: 3 outcomes are accessible in a single "move", since the high level search returns 1 of 3 states ∈ Ω. All outcomes are accessible via 2 "moves". Is there anything in the above comparison that you disagree with? If not, can you point to exactly where some outcomes are inaccessible in "one move" in my random walk, but accessible in "one move" in Dembski's examples? R0bb
Joe:
Again it is the search space, not omega, that is relevant. That is what I am saying.
Yes, that's what you're saying now. But at the beginning of our conversation, it was my definitions of Ω that you were objecting to. Do you see how your more recent statements regarding Ω contradict your earlier objections? Have you retracted those objections? The amount of active information in a search and whether a two-tier search scenario violates the LCI hinge on |Ω|. These are the issues we're discussing, so how can Ω not be relevant?
How do you know the map you have is correct? Or the recipe is what you want?
In my experience, maps and recipes usually have titles, and publishers like Rand McNally and Betty Crocker are known to make maps and recipes that are accurate and match their titles. Is your experience different from this? R0bb
R0bb:
Ω can be defined to include 5 outcomes or 5 zillion outcomes.
If and only if each of the 5 zillion outcomes can be had in one move. And in YOUR random walk example that is not true, therefore your Ω is improperly defined as explained in comment 164. The comment you chose to ignore for obvious reasons. You lose. Joe
Joe:
Nice spin.
If I didn't accurately reflect what you were saying, then by all means, give us a more accurate summary of your examples.
Again it is the search space, not omega, that is relevant.
Our dispute is over the definitions of Ω in my examples, so how could Ω not be relevant?
That you keep going back to omega tells me you have deceptive intentions.
Your objections to my examples were in regards to my definitions of Ω. In the first example I said |Ω| = 5, and you said it should be 2, because there are only 2 possible outcomes from a given initial state. You have since reversed that position. I'm now trying to get an acknowledgement from you that your objection isn't valid. This is the opposite of deception -- I'm trying to get everything out in the open and stated plainly.
Because, in reality, your example’s omega would be much greater than 5 because there are more zero-probability outcomes you haven’t included.
Yes, exactly! Ω can be defined to include 5 outcomes or 5 zillion outcomes. If you're using standard statistical measures or stochastic process tools, it doesn't matter, other than a question of convenience. But if you're using Dembski's framework, it does matter. The amount of active information depends on how you choose to model Ω. So given a real-world process X and an accurate and mathematically valid model M, how do I determine whether M is a "proper" model? It's easy to object to models on an ad hoc basis, calling them "muddled" or "improper", whatever that means. But how do you generalize your rules of modeling, such that all of Dembski's examples and CoI theorems are "proper"? I'm all ears. R0bb
Ω has to be defined as something you can get to with one move. For a fair six-sided die Ω is 1-6. The example we have been discussing has Ω = 16, each with an equal probability of being searched with one move. In R0bb's random walk example you have 2 possible positions to be in after one move, yet he sets his Ω at 5, which messes up the equation (GIGO) and sez "I have disproven the LCI". Joe
R0bb:
It’s certainly true that a search has greater probability of success if we have something that increases its probability of success.
How do you know the map you have is correct? Or the recipe is what you want? Joe
R0bb:
It’s certainly true that a search has greater probability of success if we have something that increases its probability of success.
Nice spin. R0bb:
I explicitly said “from Ω” and “in Ω”.
Again it is the search space, not omega, that is relevant. That is what I am saying. That you keep going back to omega tells me you have deceptive intentions. Because, in reality, your example's omega would be much greater than 5 because there are more zero-probability outcomes you haven't included. Also with Dembski's example you can get to ANY square at the first move- even the zero-probability squares, if you so choose to pick a known loser. In your random walk example that is not so as you are very limited in the moves you can make. Joe
scordova:
If you say that |T|/|Ω| = 1/2 because there are only two states, that's not quite correct because if they are not equiprobable states, you have to modify the way you do the accounting, the fact that P(T) is 5/6 is an indication that they are not equiprobable states, and hence the ratio |T|/|Ω| needs to account for this.
|T|/|Ω| is the probability of success of the baseline search. In the baseline search, outcomes are always equiprobable, by definition. So the baseline search for my model has this probability distribution: P("1") = 1/2 P("higher than 1") = 1/2 If that strikes you as problematic, then we're on the same page. That's how Dembski and Marks' framework is defined, and it's a problem. You could argue that Ω shouldn't be defined as {"1", "higher than 1"}, but there is nothing mathematically wrong with defining it that way. Dembski could add a caveat to the definition of "active information" saying that Ω must be properly defined, but then how do we define "properly defined"? R0bb
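As a purely illustrative sketch (mine, not the article's or the Evo Info Lab's), here is the active-information calculation under the two individuations discussed here; the die's statistics never change, yet the coarser individuation yields the roughly 0.7-bit figure quoted further down the thread:

from math import log2

q = 5/6                     # probability that a fair die shows something higher than 1

# Individuation A: Omega = {1, 2, 3, 4, 5, 6}, target T = {2, 3, 4, 5, 6}
p_a = 5/6                   # baseline probability |T| / |Omega|
print(log2(q / p_a))        # 0.0 bits of active information

# Individuation B: Omega = {"1", "higher than 1"}, target T = {"higher than 1"}
p_b = 1/2                   # baseline probability |T| / |Omega| under the coarser partition
print(log2(q / p_b))        # ≈ 0.737 bits of active information, from relabelling alone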
Joe:
Ω can contain zero probability sections and those zero probability sections are never considered in the equation.
What equation are you talking about? Active information is a function of Ω, and therefore the amount of active information changes depending on whether we include zero-probability outcomes in Ω. By acknowledging that Ω can contain zero-probability outcomes, you've reversed your previous position. You seem to have forgotten that your objection to my random walk example was over the definition of Ω. You said that Ω is 2 instead of 5, since there are only 2 possibilities for a given initial state. Now you're saying that outcomes needn't be possible in order to be included in Ω. Now that you've nullified your objection, is our disagreement settled? If you were to read Dembski and Marks' papers, you would realize that your objection never made sense in the first place. If we were to define Ω such that it includes only the outcomes accessible to a given state, we would have to have a different Ω for each initial state. That's not how Dembski and Marks' framework works, as you'll see for yourself if you read their work. R0bb
Joe:
So no, we were not talking about their inclusion in the definition of Ω, we were talking about their inclusion in the search space. At least I was and I made that very clear.
The subconversation about including numbers 7+ started with me saying, "Consistently excluding zero-probability outcomes from Ω would yield bizarre results." (Emphasis added.) You answered, "So with the dice example I quoted above does that mean we should also include numbers 7 - infinity?" Are you now saying that you were responding to my point about excluding outcomes from Ω with a question about including outcomes in something other than Ω? And without any indication that you were changing the subject? How does that make sense? Here is the whole thread, with the references to Ω bolded:
R0bb:
Consistently excluding zero-probability outcomes from Ω would yield bizarre results.
So with the dice example I quoted above does that mean we should also include numbers 7 - infinity? And of course a coin toss would then have more than two outcomes- in zero G. Wow R0bb, thanks. That clears up my misunderstanding. If we just do whatever we want we can violate the LCI. I bet you ran around the table with your hands in the air, cheering for yourself, once you figured that out. Thumbs high, big guy
Consistently excluding zero-probability outcomes from Ω would yield bizarre results.
So with the dice example I quoted above does that mean we should also include numbers 7 - infinity?
No, it means that if we always exclude zero-probability outcomes from Ω, then in some cases active information will decrease when a search improves. Do you agree? As for including numbers 7 - infinity (by which I assume you mean all integers greater than 6, none of which is actually infinity), is there any reason not to do so, other than inconvenience?
#44 As for including numbers 7 - infinity (by which I assume you mean all integers greater than 6, none of which is actually infinity), is there any reason not to do so, other than inconvenience?
Yes, if you include them then you could never figure out the odds of rolling a "6" with a fair die. As I said you appear not to know anything about the topic that you are trying to discuss. But seeing that you are anonymous you don't care that you look foolish.
As for including numbers 7 - infinity (by which I assume you mean all integers greater than 6, none of which is actually infinity), is there any reason not to do so, other than inconvenience?
Yes, if you include them then you could never figure out the odds of rolling a "6" with a fair die
The probability of rolling a 6 would still be 1/6, and every number higher than 6 would have a probability of zero. The mean, median, variance, etc. are unaffected by inclusion of the higher numbers in Ω. Can you explain what the problem is?
R0bb:
The probability of rolling a 6 would still be 1/6, and every number higher than 6 would have a probability of zero.
Then the numbers higher than 6 are NOT included.
Joe:
Then the numbers higher than 6 are NOT included.
We're talking about inclusion in the definition of Ω. Are you under the impression that if an outcome has zero probability, then it's automatically excluded from the definition of the sample space? I thought we were now in agreement that Ω can contain zero-probability outcomes, since it does so in the example from the S4S paper. If not, there are some questions that I've already asked that will settle this if you'll attempt to answer them. Will you?
R0bb, You keep equating Ω with the searchable space. The two are not the same. Ω can contain zero probability sections and those zero probability sections are never considered in the equation. So no, we were not talking about their inclusion in the definition of Ω, we were talking about their inclusion in the search space. At least I was and I made that very clear.
I explicitly said "from Ω" and "in Ω". Where in the thread regarding the inclusion of 7+ did you indicate that you were talking about something other than Ω? R0bb
Joe:
scordova, Chance Ratliff, Dieb, onlooker, and anyone else who might read this comment: Do any of you agree that the following is how the LCI applies to the real world?
R0bb if you doubt it why don't you just make your case?
Because I think it would help if you saw that others disagree with you. But I doubt that anyone is reading our conversation. As for your understanding of what the LCI tells us:
As for how the LCI applies to the real world- It tells us that having directions or a recipe is an easier way to have a successful search than to just do stuff until you get what you want. If you have directions they narrow your search grid, ie they provide active information. The same with a recipe.
It's certainly true that a search has greater probability of success if we have something that increases its probability of success. But do you really think that we need the LCI to tell us this tautological fact? The LCI doesn't come into play unless we factor in the "cost" of finding whatever it is that increases the original search's probability of success, in which case the probability of finding the original target goes down (or stays the same), not up, according to LCI. R0bb
Joe:
I'll ask you a few of the questions that Joe hasn't answered: - If I define Ω for a die as {even numbers, odd numbers}, is that improperly defined?
I answered that one- yes it is improperly defined as 8 is an even number but 8 isn't in the search space.
Sorry, I had forgotten that you had already answered this. And my intent was {even numbers on a die, odd numbers on a die}, but I wasn't explicit about that. Regardless, when you say that "8 isn't in the search space", I assume you mean that there is no way to get an 8 from rolling a die. But you have already acknowledged that zero-probability outcomes can be included in Ω, so why can't all numbers be included?
- If I define Ω for poker hands as {royal flush, everything else}, is that improperly defined?
Yes
And yet Dembski has used that exact sample space in at least three of his works. R0bb
R0bb:
- If I define Ω for a die as {even numbers, odd numbers}, is that improperly defined?
No. that's a 50/50 probability distribution.
- If I define Ω for poker hands as {royal flush, everything else}, is that improperly defined?
No. Don't ask me for the distribution, though I could probably figure it out given time, lol. I think you highlight an area where Sal needs additional instruction/contemplation. He sometimes confuses {1/6, 1/6, 1/6, 1/6, 1/6, 1/6} with {1/2, 1/2} Mung
R0bb from #30:
The problem is that Dembski and Marks don’t define Ω this way. In most of their examples, active information is the result of restricting the alternate search to a subset of Ω.
Exactly! The restricted subset does not include the zero-probability choices.
If they were to follow your reasoning, they would define Ω to include only the outcomes that are accessible to the alternate search
They redefine the search space within Ω.
, and the resulting active information would be zero. So by your reasoning, their examples of active information don’t really have any active information.
LoL! The restricted subset is the active information, R0bb. THAT is the whole point behind the restriction- the search space is now limited therefore making the search easier. Joe
R0bb:
scordova, Chance Ratliff, Dieb, onlooker, and anyone else who might read this comment: Do any of you agree that the following is how the LCI applies to the real world?
R0bb if you doubt it why don't you just make your case? Joe
R0bb:
I’ll ask you a few of the questions that Joe hasn’t answered: - If I define Ω for a die as {even numbers, odd numbers}, is that improperly defined?
I answered that one- yes it is improperly defined as 8 is an even number but 8 isn't in the search space.
- If I define Ω for poker hands as {royal flush, everything else}, is that improperly defined?
Yes Joe
Robb, You asked.
The second model says that the outcome “1” occurs with a probability of 1/6, and that the outcome “higher than 1” occurs with a probability of 5/6. How does that not conform to a fair die? How does it deviate from the actual statistics of what is being modeled?
I should have been more explicit. I was referring to this claim:
But we could, instead, define Ω as {1, higher than 1}. In that case, I+ = log2((5/6) / (1/2)) = .7 bits.
Redefining Ω in this way is not faithful to the statistics of a fair die. You gave no justification for leaving P(T) at 5/6 while letting |T|/|Ω| = 1/2. If you say that |T|/|Ω| = 1/2 because there are only two states, that's not quite correct, because if they are not equiprobable states, you have to modify the way you do the accounting. The fact that P(T) is 5/6 is an indication that they are not equiprobable states, and hence the ratio |T|/|Ω| needs to account for this. As I pointed out, that ratio has to be deduced with some qualification. If the states are not equiprobable, that needs to be accounted for. A charitable reading would have perceived this. But anyway, these are the sort of discussions that need to be passed on to Bill, Marks, and the Evolutionary Informatics Lab. I hope they will address your points. Sal scordova
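For what it's worth, the two accounting conventions being argued over here can be put side by side in a few lines of Python (an editorial sketch; the variable names are mine). The first line treats the two cells of {1, higher than 1} as equiprobable when forming |T|/|Ω|, which yields the ".7 bits" figure quoted above; the second weights the baseline by the cells' actual probabilities under a fair die, as Sal proposes, which drives the active information to zero.

from math import log2

p_T = 5 / 6                    # probability of the target cell "higher than 1" on a fair die

counted_baseline = 1 / 2       # two cells treated as equiprobable
weighted_baseline = 5 / 6      # cell weighted by its actual probability

print(log2(p_T / counted_baseline))    # ~0.737 bits -- the ".7 bits" figure
print(log2(p_T / weighted_baseline))   # 0.0 bits -- no active information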
Since q is not a function of |Ω|, obviously we don't expect to see the number 16 in its calculation. But I_+, the active information, is a function of Ω, and we do see the number 16 (log-scaled) in its calculation:
R0bb, that is because that calculation is for the difference between the two figures. That 16 refers to figure 1. And that just tells you the amount of information each square contains, i.e. 2^4. Joe
R0bb, You keep equating Ω with the searchable space. The two are not the same. Ω can contain zero-probability sections and those zero-probability sections are never considered in the equation. So no, we were not talking about their inclusion in the definition of Ω, we were talking about their inclusion in the search space. At least I was and I made that very clear. Joe
scordova:
Thank you for your discussion, but I think this isn't the most charitable interpretation regarding the die:
I'm very glad to see you joining the discussion, scordova. I'm not sure what your point is regarding charitability. Are you saying that I'm interpreting Dembski uncharitably, or that I'm modeling the die uncharitably? WRT the way that you say it should be modeled, I think what you mean is the following:
|{1}| = 1
|T| = |{2,3,4,5,6}| = 5
|Ω| = |{1}| + |{2,3,4,5,6}| = 6
|T|/|Ω| = 5/6
That, of course, is identical to the first model in the article. I submit that the second model, where the outcomes are individuated differently, is no less accurate than the first. You seem to dispute this:
Actually the different numbers arise because the model no longer conforms to a fair die. The notion of the measure of information not being arbitrary, but conforming to the statistics of the probability, still holds. Deviation from the actual statistics of what is being modeled in the way that you have done is imposing an arbitrary (and incorrect) measure of information for the system being modeled.
The second model says that the outcome "1" occurs with a probability of 1/6, and that the outcome "higher than 1" occurs with a probability of 5/6. How does that not conform to a fair die? How does it deviate from the actual statistics of what is being modeled? I'll ask you a few of the questions that Joe hasn't answered: - If I define Ω for a die as {even numbers, odd numbers}, is that improperly defined? - If I define Ω for poker hands as {royal flush, everything else}, is that improperly defined? You say that the measure of information in the example is incorrect -- can you point to the error? You also say that it's arbitrary, which sounds like you're agreeing with the point of the article, since the information measure is Dembski's active information. Dembski used to make this same point often. As quoted in the article, he said that an information measure is ill-defined if it depends on how the possibilities are individuated. He later invented an information measure, namely active information, that depends on how the possibilities are individuated. R0bb
Joe:
Then the numbers higher than 6 are NOT included.
We're talking about inclusion in the definition of Ω. Are you under the impression that if an outcome has zero probability, then it's automatically excluded from the definition of the sample space? I thought we were now in agreement that Ω can contain zero-probability outcomes, since it does so in the example from the S4S paper. If not, there are some questions that I've already asked that will settle this if you'll attempt to answer them. Will you? R0bb
Joe:
Thus the equation for figure 2 does not contain the number 16
The section regarding figure 2 contains calculations of q, I_S, and I_+. When you say "the equation for figure 2", I assume you're referring to the calculation of q. Is that right? Since q is not a function of |Ω|, obviously we don't expect to see the number 16 in its calculation. But I_+, the active information, is a function of Ω, and we do see the number 16 (log-scaled) in its calculation:
I_+ = 4.00-2.59 = 1.41 bits.
Without the negative log-scaling, this says that the active probability = endogenous probability / exogenous probability = (1/16) / (1/6). Note that it's 1/16, not 1/13, even though there are only 13 non-zero-probability choices. By exactly the same token, the active probability in my random walk example is (1/5) / (1/2), not (1/2) / (1/2), even though there are only 2 non-zero-probability choices. Are we in agreement now? R0bb
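The log arithmetic in this exchange is easy to check. Below is a minimal Python sketch (an editorial illustration); the 1/6 figure is recovered on the assumption, consistent with the numbers quoted in this thread, that the single target square lies in the overlap of A and B, so the alternate search reaches it with probability 1/2 · 1/4 + 1/2 · 1/12.

from math import log2

p = 1 / 16                            # endogenous: one target square in the 16-square grid
q = 0.5 * (1 / 4) + 0.5 * (1 / 12)    # alternate search: 1/2 on A (4 squares), 1/2 on B (12 squares)

endogenous_info = -log2(p)            # 4.00 bits
exogenous_info = -log2(q)             # ~2.59 bits

print(q)                                  # 0.1666... = 1/6
print(endogenous_info - exogenous_info)   # ~1.41 bits of active information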
scordova, Chance Ratcliff, DiEb, onlooker, and anyone else who might read this comment: Do any of you agree that the following is how the LCI applies to the real world?
As for how the LCI applies to the real world- It tells us that having directions or a recipe is an easier way to have a successful search than to just do stuff until you get what you want. If you have directions they narrow your search grid, i.e. they provide active information. The same with a recipe.
R0bb
Sal, Please see Chance's post #42 above for how to post Ωs. Joe
Sorry to the readers: my omegas are appearing as "?" after I made the previous post. Some unwanted transformation occurred after I hit the "Post Comment" button. scordova
R0b, Thank you for your discussion, but I think this isn't the most charitable interpretation regarding the die:
" What we’re modeling hasn’t changed, but we’ve gained active information by making a different modeling choice. "
The ratio |T|/|Ω| should be based on the space of outcomes adjusted for the probability of those outcomes; it should not be based on merely counting the discrete states (unless the states are equiprobable). Hence if you define Ω in only two states:
State 1: 1
State 2: 2, 3, 4, 5, or 6
The corresponding space of outcomes needs to be adjusted such that |T|/|Ω| is 5/6, otherwise the model would not be faithful to the statistics of the object (a fair die) that is being modeled. That would be the charitable reading. You are of course free to complain that Bill could have said things otherwise to avoid such uncharitable readings.
|{1}| = 1/6
|T| = |{2,3,4,5,6}| = 5/6
|Ω| = |{1}| + |{2,3,4,5,6}| = 1
|T|/|Ω| = 5/6
"What we’re modeling hasn’t changed, but we’ve gained active information by making a different modeling choice "
Actually the different numbers arise because the model no longer conforms to a fair die. The notion of the measure of information not being arbitrary, but conforming to the statistics of the probability, still holds. Deviation from the actual statistics of what is being modeled in the way that you have done is imposing an arbitrary (and incorrect) measure of information for the system being modeled. scordova
R0bb:
Do you see now that the new search grid, as you call it, has 13 squares, not 16 as you said in #94?
The search grid consists of 13 squares; however, it is made up of two sets, one containing 12 and the other containing 4. 12 + 4 = 16. That 3 squares overlap is of no concern to what I said in #94. Ya see R0bb, well before 94 I exposed your nonsense. The nonsense that you still refuse to address. Joe
R0bb:
The probability of rolling a 6 would still be 1/6, and every number higher than 6 would have a probability of zero.
Then the numbers higher than 6 are NOT included. Joe
OK my apologies to R0bb and UD- Ω does NOT change. The search grid within Ω changes. The original search grid included all 16 blocks. The new search grid, the one with active information, includes ONLY: • A is the uniform distribution over the rightmost four squares in the search space. • B is the uniform distribution over the bottom twelve squares in the search space.
Thanks, Joe. Do you see now that the new search grid, as you call it, has 13 squares, not 16 as you said in #94? R0bb
As for including numbers 7 – infinity (by which I assume you mean all integers greater than 6, none of which is actually infinity), is there any reason not to do so, other than inconvenience?
Yes, if you include them then you could never figure out the odds of rolling a “6” with a fair die
The probability of rolling a 6 would still be 1/6, and every number higher than 6 would have a probability of zero. The mean, median, variance, etc. are unaffected by inclusion of the higher numbers in Ω. Can you explain what the problem is? R0bb
As for how the LCI applies to the real world- It tells us that having directions or a recipe is an easier way to have a successful search than to just do stuff until you get what you want. If you have directions they narrow your search grid, i.e. they provide active information. The same with a recipe. Joe
OK my apologies to R0bb and UD- Ω does NOT change. The search grid within Ω changes. The original search grid included all 16 blocks. The new search grid, the one with active information, includes ONLY: • A is the uniform distribution over the rightmost four squares in the search space. • B is the uniform distribution over the bottom twelve squares in the search space. Thus the equation for figure 2 does not contain the number 16. And that means the new search grid excludes R0bb's 1.1, 1.2, 1.3. And that has been my point all along. Joe
Anyone can have a look: The Search for a Search: Measuring the Information Cost of Higher Level Search, page 478 Joe
R0bb:
Because the example explicitly defines Ω as all the squares in the grid, and it never says that Ω changes.
Yes, it says Ω changes (from figure 1 to figure 2) and provides those changes. That you can't even be honest about that exposes your agenda. Joe
Joe:
How can that be given: • A is the uniform distribution over the rightmost four squares in the search space. • B is the uniform distribution over the bottom twelve squares in the search space. Explain yourself, I dare you…
Because the example explicitly defines Ω as all the squares in the grid, and it never says that Ω changes. The two sentences you quote say nothing about Ω. I've tried to make that clear throughout; I apologize if I failed to do so. Since I've answered that, I'll try once again with the following question, and I'll make it even more concise: Given:
|A| = 4
|B| = 12
|A∩B| = 3
Ω = A∪B
What is |Ω|? Two keystrokes is all it takes. You can answer it faster than you can tell me that you're not going to answer it. R0bb
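The cardinality question can also be settled mechanically. Here is a short Python check (an editorial illustration) using the row.column labels R0bb introduces further down the thread, reading A as the rightmost column and B as the bottom three rows, which is the reading that gives |A| = 4, |B| = 12, and |A∩B| = 3.

rows = cols = range(1, 5)
grid = {f"{r}.{c}" for r in rows for c in cols}          # all 16 labeled squares

A = {f"{r}.4" for r in rows}                             # rightmost four squares
B = {f"{r}.{c}" for r in rows for c in cols if r >= 2}   # bottom twelve squares

print(len(A), len(B), len(A & B), len(A | B))   # 4 12 3 13
print(sorted(grid - (A | B)))                   # ['1.1', '1.2', '1.3'] -- the squares A∪B leaves out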
Robb:
If we label the squares in the grid from the example as follows:
1.1 1.2 1.3 1.4
2.1 2.2 2.3 2.4
3.1 3.2 3.3 3.4
4.1 4.2 4.3 4.4
I claim that in the example we’re discussing, Ω = {1.1, 1.2, 1.3, 1.4, 2.1, 2.2, 2.3, 2.4, 3.1, 3.2, 3.3, 3.4, 4.1, 4.2, 4.3, 4.4}.
How can that be given: • A is the uniform distribution over the rightmost four squares in the search space. • B is the uniform distribution over the bottom twelve squares in the search space. Explain yourself, I dare you... Joe
R0bb, Why are you even asking me? Everything is defined in the paper you are referencing. And if you can't understand what is in the paper, as obviously you do not, then you should not try to discuss it. So no, I have indulged you enough- no, more than enough. Joe
Joe, I know you're very busy, but the following question can be answered in only two keystrokes: Given the following:
– Set A has 4 elements
– Set B has 12 elements
– 3 of the elements in A are also in B
– Ω = A∪B
What is |Ω|? Also, when I say "in set notation", I mean in a form like "Ω = {1.1, 1.2, 1.3, 1.4, 2.1, 2.2, 2.3, 2.4, 3.1, 3.2, 3.3, 3.4, 4.1, 4.2, 4.3, 4.4}". Can you tell me, in that form, what you're claiming Ω to be? (This will require more than a few keystrokes, but maybe you'll indulge me.) Likewise, the questions about Marks' egg recipe and Dembski's treasure map can be answered in only a couple of keystrokes, so maybe you can take a few seconds and answer them. Thanks in advance. R0bb
onlooker- R0bb's posts on TSZ are a failure- a failure to comprehend what Dembski and Marks are saying, which led to a failure to properly address what they said. And one reason that you don't see progress here is that you and your ilk prevent it. This thread is a great case in point... Joe
Chance,
onlooker @104, my #97 illustrates what I see as the difference.
At least we've clearly articulated our differences on this small point. That's more progress than is often seen here! If you have the time and inclination, I am interested in your thoughts on the larger issue of the validity of Dembski's "Law", in particular R0b's posts at The Skeptical Zone. onlooker
#44 “As for including numbers 7 – infinity (by which I assume you mean all integers greater than 6, none of which is actually infinity), is there any reason not to do so, other than inconvenience?”
Yes, if you include them then you could never figure out the odds of rolling a "6" with a fair die. As I said you appear not to know anything about the topic that you are trying to discuss. But seeing that you are anonymous you don't care that you look foolish. Joe
Okay, in #120 I told you, in set notation, what I think the definition of Ω is in that example. Can you tell me, in set notation, what you're claiming the definition of Ω is?
I told you- • A is the uniform distribution over the rightmost four squares in the search space. • B is the uniform distribution over the bottom twelve squares in the search space. That you refuse to understand that exposes your agenda and your lack of integrity. Joe
Marks offers an example of searching for a good boiled egg recipe, given 66 possibilities.
Blah, blah, blah- It has already been proven that your intent is just to purposely mess up whatever Marks or Dembski says. So why should anyone try to respond to you? Joe
Given the following:
- Set A has 4 elements
- Set B has 12 elements
- 3 of the elements in A are also in B
- Ω = A∪B
What is |Ω|? If we label the squares in the grid from the example as follows:
1.1 1.2 1.3 1.4
2.1 2.2 2.3 2.4
3.1 3.2 3.3 3.4
4.1 4.2 4.3 4.4
I claim that in the example we’re discussing, Ω = {1.1, 1.2, 1.3, 1.4, 2.1, 2.2, 2.3, 2.4, 3.1, 3.2, 3.3, 3.4, 4.1, 4.2, 4.3, 4.4}.
You can claim that all you want. It proves my point that you are wasting my time. What the ACTUAL example says:
• A is the uniform distribution over the rightmost four squares in the search space. • B is the uniform distribution over the bottom twelve squares in the search space.
R0bb's example doesn't look like the actual example. Joe
What equation are you talking about?
The equation that goes with the figure/ example. As I said you are wasting my time.
What if I defined Ω for a die as {even numbers, odd numbers}. Is that improperly defined?
Yes, it is as 8 is an even number that is not on the die. And BTW R0bb, throwing the die and having to have it land on a certain number is a measure of the physical difficulty of doing so. Joe
Joe:
We always exclude zero-probability outcomes. That you think we don't or shouldn't tells me that you aren't interested in having a serious discussion.
Marks offers an example of searching for a good boiled egg recipe, given 66 possibilities. Knowledge of chemistry reduces the number of possibilities to 44. I claim that Marks would say that there are about .6 bits of active information in this knowledge of chemistry. According to your position, how many bits of active information are in this knowledge of chemistry? Dembski often uses an example in which a treasure map reduces the number of possible burial sites to one. According to your position, how much active information is in the map?
We have been over this already and you pointed to an example that excluded 3 blocks that had a zero probability as if they were included.
Okay, in #120 I told you, in set notation, what I think the definition of Ω is in that example. Can you tell me, in set notation, what you're claiming the definition of Ω is? R0bb
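Taking the two examples at face value, the figures R0bb attributes to them drop out of the same log-ratio. A small Python sketch (an editorial illustration using the counts given in the comment above; the 1024-site island in the last line is purely hypothetical) shows where the "about .6 bits" comes from and what the corresponding number would be for a map that narrows N possible burial sites to one.

from math import log2

# Marks' boiled-egg example as described above: 66 candidate recipes, chemistry narrows it to 44
print(log2(66 / 44))            # ~0.585 bits, the "about .6 bits" figure

# Dembski's treasure-map example: N candidate sites narrowed to a single one
def map_active_info(n_sites):
    return log2(n_sites)        # log2(N) bits -- all of the endogenous information

print(map_active_info(1024))    # 10.0 bits for a hypothetical 1024-site island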
Joe:
Which sets were improperly defined, and what was improper about them?
They look as if a 3 year old defined them. Other than that, nice job for a 3 year old. For example:
But we could, instead, define Ω as {1, higher than 1}
Yes, you could so define the set, if you want to be a jerk or if you are a 3 year old.
What if I defined Ω for a die as {even numbers, odd numbers}. Is that improperly defined? What if we were talking about poker hands and I defined Ω as {royal flush, everything else}. Is that improperly defined? R0bb
But 3 of the squares in A are also in B.
And you’re just realizing that now?
Of course not. What makes you think that?
Did you not read the equation and discussion or did you just decide to post without understanding it?
What equation are you talking about? Given the following:
- Set A has 4 elements
- Set B has 12 elements
- 3 of the elements in A are also in B
- Ω = A∪B
What is |Ω|? If we label the squares in the grid from the example as follows:
1.1 1.2 1.3 1.4
2.1 2.2 2.3 2.4
3.1 3.2 3.3 3.4
4.1 4.2 4.3 4.4
I claim that in the example we're discussing, Ω = {1.1, 1.2, 1.3, 1.4, 2.1, 2.2, 2.3, 2.4, 3.1, 3.2, 3.3, 3.4, 4.1, 4.2, 4.3, 4.4}. According to you, what is Ω? Can you answer the questions in bold above? R0bb
Does evolutionism apply to the real world? If yes, in what way? Joe
R0bb- We always exclude zero-probability outcomes. That you think we don't or shouldn't tells me that you aren't interested in having a serious discussion. We have been over this already and you pointed to an example that excluded 3 blocks that had a zero probability as if they were included. Joe
R0bb:
Which sets were improperly defined, and what was improper about them?
They look as if a 3 year old defined them. Other than that, nice job for a 3 year old. For example:
But we could, instead, define Ω as {1, higher than 1}
Yes, you could so define the set, if you want to be a jerk or if you are a 3 year old. You choose... Joe
R0bb:
But 3 of the squares in A are also in B.
And you're just realizing that now? Did you not read the equation and discussion or did you just decide to post without understanding it?
As has been pointed out by Dieb and others, the probability is the same whether you search for the 6 by rolling the die directly or search for the 6 indirectly via the machines. Are you under the impression that when Dembski says “difficulty”, he’s referring to something more than just improbability?
The probability includes the probability of locating the machines, i.e. the difficulty.
But I didn’t say “without any prior knowledge of where the needle is located”. I said “without any prior knowledge”.
As I tried to tell, THAT doesn't even make any sense. Joe
R0bb, thanks, appreciated. "But I disagree with Chance’s statement that the scenario is perfectly clear." I'll revise my statement to say that it was perfectly clear to me. "Why does he say “the probability of finding item 6 using this machine, once we factor in the probabilistic cost of securing the machine”, rather than simply saying “the probability of finding this machine AND finding item 6 with it”?" Perhaps he thought his intended audience would understand his meaning. I certainly did. "Why not just state the LCI as “P(A&B) ≤ P(B)” and call it good?" Yes, that's exactly the implication. I can only imagine that he wanted to be clear to his audience, that given circumstances of a search for a shortcut, one might actually do definitively worse, and he wanted to show why this might be the case. I found the example straightforward enough. DiEb @93, I didn't see your comment previously. I didn't intend to ignore. "1) Do you agree that the probability to find the target “6” using the two-layered system of at first choosing a machine at random and then let the machine choose the target is 1/6 ?" I said as much in #87, and provided an example at #97, agreeing that the total probability comes out to 1/6, but disagreeing that it was relevant to Dembski's example of an intersection of events. "2) W. Dembski says: “So our attempt to increase the probability of finding item 6 by locating a more effective search for that item has actually backfired, making it in the end even more improbable that we’ll find item 6.” How do you square this with your answer to 1) ?" If the machine is chosen specifically for its property of increasing the probability of securing item 6, then it comes with a 1/6 cost. If one cares not which machine one obtains, then there is no intersection to account for, and the total probability applies. Dembski was clear that the incurred cost may negatively affect the outcome, not that it certainly would. By that qualification, it may or may not come with a cost:
Conservation of information says that this is always a danger when we try to increase the probability of success of a search -- that the search, instead of becoming easier, remains as difficult as before or may even, as in this example, become more difficult once additional underlying information costs, associated with improving the search and often hidden, as in this case by finding a suitable machine, are factored in.
Those comments are very much in context to the example provided. That example was merely to demonstrate what might constitute a "cost" by necessitating the securing of the 'six' machine. onlooker @104, my #97 illustrates what I see as the difference. Glad to hear that you enjoyed your weekend. :-) Chance Ratcliff
Tangents, tangents . . . kairosfocus
Which sets were improperly defined, and what was improper about them?
They were set in jello? Mung
Ok R0bb- your math is invalid because your sets were improperly defined.
Which sets were improperly defined, and what was improper about them? R0bb
Joe:
R0bb? Is that it then?
That depends. Do you plan on answering the following questions that I've asked? #31 "Have you read the examples and proofs in Dembski's work?" #43 "As I pointed out in #31, endogenous probability is defined as |T|/|Ω|, not |T|/(number of choices). Do you agree that this is how it's defined?" #43 "You might be of the opinion that Ω is supposed to be defined such that |Ω| = number of choices, and therefore endogenous probability = |T|/|Ω| = |T|/(number of choices). Is that your position?" #43 "If that is your position, do you believe that Dembski and Marks always define Ω such that |Ω| = number of choices?" #43 "If that is not your position, why do you think that the endogenous probability is |T|/(number of choices)?" #43 "Consider the concept of "Brillouin active information" ... Why would Dembski and Marks define a measure that's always zero?" #44 "Pardon my thick skull, but are you referring to my random walk example? If so, what aspect of the random walk are you describing as a roll of the die? Or are you talking about the die example in my second post at TSZ?" #44 "No, it means that if we always exclude zero-probability outcomes from Ω, then in some cases active information will decrease when a search improves. Do you agree?" #44 "As for including numbers 7 - infinity (by which I assume you mean all integers greater than 6, none of which is actually infinity), is there any reason not to do so, other than inconvenience?" #45 "Can you provide definitions for your terms 'muddled' and 'properly defined'? A distribution is muddled iff __________________________. A distribution is properly defined iff __________________________." #57 "Where exactly did I mangle what Dembski said?" #57 "Consider some of their other examples of active information, like Marks' example of finding a good recipe for boiling an egg. ... Do you think Marks would agree that the active info is zero?" #57 "Also consider Dembski's oft-used example of a treasure map. Them map eliminates all outcomes except for one. Does the map have zero active information?" #68 "In #53 you said that my examples were mathematically valid. Have you changed your mind?" #69 "And BTW, how does Dembski's example apply to the real world?" #69 "How do his three CoI theorems apply to the real world?" #69 "If any of Dembski's examples or theorems don't apply to the real world, is he being uncivil by bringing them up?" And a question that I've asked three times: "If the LCI fails in mathematically valid cases, is it a true law?" And the most important questions, from #58: "1. Are you interested in doing the work it takes for us to understand each other?" "2. Are you interested in having a civil discussion, free of taunts?" R0bb
DiEb, Chance, and others: I agree with Chance, in that I think Dembski's intent is to say that the probability of A AND B is 1/12. I think this because he is more explicit about it in the section "The LCI Regress" on page 25 of this paper. The LCI Regress amounts to nothing more than the fact that hitting two targets is less likely than hitting just one. But I disagree with Chance's statement that the scenario is perfectly clear. I think Dembski could have easily made it more clear. Why does he say "the probability of finding item 6 using this machine, once we factor in the probabilistic cost of securing the machine", rather than simply saying "the probability of finding this machine AND finding item 6 with it"? Why speak sometimes in terms of "cost" and other times in terms of "probability", sometimes in the same sentence, when all he's talking about is probability? (For that matter, why does he use words like "complexity", "information", or "difficulty" when he means improbability? Some readers, including some IDists, take "complex" to mean complicated, "information" to mean symbols that convey meaning, and Joe thinks that Dembski is referring, at least in part, to physical difficulty.) Why not just state the LCI as "P(A&B) ≤ P(B)" and call it good? R0bb
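R0bb's proposed one-line reading of the LCI is just the fact that a joint probability can never exceed either of its marginals. Using the toy numbers quoted elsewhere in the thread (one machine that delivers item 6 with probability 1/2, five that deliver it with probability 1/10), a few lines of Python (an editorial sketch) make the comparison explicit.

from fractions import Fraction

P_A = Fraction(1, 6)              # securing the "six" machine
P_B_given_A = Fraction(1, 2)      # item 6, given that machine
P_B_given_notA = Fraction(1, 10)  # item 6, given any of the other five machines

P_A_and_B = P_A * P_B_given_A                           # 1/12
P_B = P_A * P_B_given_A + (1 - P_A) * P_B_given_notA    # 1/6, by total probability

print(P_A_and_B, P_B, P_A_and_B <= P_B)   # 1/12 1/6 True -- the "P(A&B) <= P(B)" reading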
Joe:
A different 16. A 16 that does NOT include those 3: - A is the uniform distribution over the rightmost four squares in the search space. - B is the uniform distribution over the bottom twelve squares in the search space.
But 3 of the squares in A are also in B. If you're correct that Ω = A∪B, then |Ω| = 13, not 16. (Unless you're thinking that Ω is a multiset, but you realize that a sample space can't be a multiset, right?)
Is it easier to roll a die to get your 6, a 1/6 probability than it is to also have to search for a machine that will allow you to find your 6 by say flipping a coin? And if there are 6 machines out there that will help you find your 6, but only one is the 50/50, is that less or more difficult than just rolling a die?
As has been pointed out by Dieb and others, the probability is the same whether you search for the 6 by rolling the die directly or search for the 6 indirectly via the machines. Are you under the impression that when Dembski says "difficulty", he's referring to something more than just improbability?
Nope. My point was that someone without any prior knowledge of where the needle is located can use a metal detector to enhance his/ her chances of having a successful search- ie making it a much better chance of success than a blind search.
(Emphasis mine.) But I didn't say "without any prior knowledge of where the needle is located". I said "without any prior knowledge". To only consider prior knowledge of where the needle is located, and ignore the prior knowledge used to build a metal detector, is an example of what Dembski and Marks call a "familiarity zone". R0bb
Ok R0bb- your math is invalid because your sets were improperly defined. That is why we don't leave math up to people intent on messing it up. Joe
Joe:
It is mathematically valid to say that 2 unicorns plus 2 unicorns = 4 unicorns. However in the real world there aren't any unicorns.
The only non-mathematical objects I've used in my examples are dice, boxes, and coins, which do exist in the real world. But that's beside the point. The point is that it's easy to show mathematical counterexamples to Dembski's mathematical "law". You said that my examples were mathematically valid, but when I asked "If the LCI fails in mathematically valid cases, is it a true law?", you said, "Can something be mathematically valid when applied to something that is invalid, such as your mangled sets?" So which is it -- are the counterexamples mathematically valid or not? Your follow-up questions were, "If so then how does it apply to the real world? And if it doesn't apply then why even bring it up unless you are not interested in a civil discussion?" My examples serve the same purpose as Dembski and Marks' examples, namely to illustrate the math. They are not intended to show a useful application of the math to the real world. As I asked before, how does Dembski's dice-and-machines example apply to the real world? R0bb
onlooker, Which of your links contains evidence that blind and undirected processes can produce CSI? And if none of them do then what is the refutation, exactly? Joe
onlooker, R0bb smooched the pooch- notice that he hasn't returned once his folly was exposed. Joe
Chance, Sorry for the delay in replying. I was enjoying the long weekend and, when I finally got around to writing a response, got distracted by some additional reading on Dembski's "Law of Conservation of Information". More on that below.
Either Dembski is talking about the probability of finding the target after the right machine is found, a true conditional probability, in which case the answer is .5 or he is talking about the “final cost”, which is the total probability of the two step process of choosing a machine and then asking it for a value, in which case the answer is 1/6.
Or he is talking about the probability of both finding the correct machine (1/6) and finding the correct item after finding the correct machine (1/2), which is actually what he said.
I agree that is what he said. My claim is that is exactly his error. By ignoring the fact that the other machines can produce the target, Dembski changes the definition of the problem midstream. The change in odds he calculates isn't due to the actual probability distribution but to an arbitrary choice to ignore possible solutions. That makes his "cost" incorrectly high. I suspect that we're doing the equivalent of arguing about angels dancing on pins, though. As I mentioned, while writing this response to you, I read through Dembski's LCI paper and discussions of it available online. I found several quite thorough refutations of it, including these:
http://www.talkorigins.org/faqs/information/dembski.html#Conservation_of_Information
http://scienceblogs.com/goodmath/2009/05/07/so-william-dembski-the/
http://scienceblogs.com/goodmath/2008/12/19/fitness-landscapes-evolution-a/
http://www.sciencemeetsreligion.org/evolution/information-theory.php
http://mfinmoderation.wordpress.com/2010/08/21/comments-on-the-law-of-conservation-of-information/
The last one listed contains links to additional discussion of the issue. Have you read any of these? If so, I'd be interested in hearing your thoughts. Some of the points raised in those reviews are echoed by R0b in three posts at The Skeptical Zone. While I mostly lurk there, I'm sure your participation would be welcome. onlooker
Ah, yup, something like: "the cost of the search for a [good] search." kairosfocus
Kairosfocus- They can't even address the fact that their distraction point also demonstrates it is easier to roll a die than it is to search for and locate the machines- just as Dembski said. However, if you are giving points for being belligerent they are racking up the points. Bless their pointy little heads. Joe
F/N: this is evidently yet another case of a red herring distractor led off to a strawman side issue. Notice, just how little of the above actually addresses the main point. KF kairosfocus
Chance- I am just saying it could be clearer and not lend any ammunition for any distractions from the point. Right now there is too much of one (distraction) and not enough of the other (discussion). It is bad enough that our opponents focus on the hand and finger that are doing the pointing and not the idea/ concept that is being pointed out. To just feed them stuff they can use to take other people's focus off the point is just poor planning. Unless, of course, it was part of the plan so you could gather data studying those types of people-> people forced to distract from the point by arguing the irrelevant minutiae. Just sayin'... Joe
Joe, I believe the scenario is perfectly clear, and I've nothing to add that hasn't been said already. If Dr. Dembski believes clarification or qualification is required, perhaps he'll offer it here. I'll check back later. Peace. Chance Ratcliff
Chance, In Dembski's scenario we can get the final target of a 6 without securing the "6" machine. And we would never know if the 6 we got was from the "6" machine or any of the other 5 machines. And if you can't tell the machines apart- isomorphic/ the principle of indifference- then the ONLY thing you will know is if your search was successful or not. You will never know why. That said the rest of what you said is correct given a specific scenario- that being the search can only continue IFF the "six" machine is chosen at the first level or the other machines do not offer a chance at the target 6. Joe
Joe, I don't think it's a problem. If we can know whether or not we obtained item 6, then we can know whether or not we obtained the 'six' machine. In Dembski's example, we're only interested in the outcome when the 'six' machine is secured, because we're not looking for a 'five' machine or any other. We throw out bad picks because they don't provide the boost in outcome that we're looking for, namely the increased probability of obtaining item 6. So in this regard we pay the probabilistic cost, 1/6, of obtaining the 'six' machine. Imagine a little man -- not little in stature, but little in mind, because he believes he can outmaneuver probability with shenanigans. His lottery is for item 6, and he wishes to secure a machine that will increase his chances of a jackpot from 1/6 to 1/2. So he searches among the various machines until he finds the 'six' machine. If he at first finds a 'four' machine, or any other, he tosses it out, and proceeds looking until the desired machine turns up. So he incurs the 1/6 penalty in order to increase his chances to 1/2. When he finds the 'six' machine, the complete cost of success is 1/12. And this is what Dembski was talking about -- the probabilistic cost of both finding the correct machine and then winning the lottery is indeed 1/12. He wasn't examining the undesired outcomes because they were not relevant. Under Dembski's scenario, if one wishes to find item 6 with a 1/2 probability, one must first secure a 'six' machine at a cost of 1/6. If instead our man were to disregard the machine type, and just take whichever machine he found at first, he would do no better, but no worse, than the original lottery, because each of the other five machines still offer a chance of scoring item 6, albeit at reduced rate of success for each individual machine. However this is not an example that is irrespective of machine choice, it is explicitly one in which the 'six' machine is first found, and then the lottery is conducted. So there's nothing confusing or erroneous about his example -- he lays out the probability for event A and B occurring, where A is the event for securing the correct machine, and B is the event for obtaining item 6. For both outcomes to occur, P(A∩B), there is a certain 1/12 probability of success. Dembski's explanation matches his math. If one wishes to know the probability of both A and B occurring, one must take the intersection of the two events, not the total probability. His statement is clear that, "The probability of finding item 6 using this machine, once we factor in the probabilistic cost of securing the machine, therefore ends up being 1/6 x 1/2 = 1/12.". If he were referring to the total probability of finding the 'six' machine, he wouldn't have used language that indicates a more specific outcome. He made no error, and he is not confused about the math with regard to his statement. They match up perfectly. P(B∩A) = P(B|A)*P(A) = 1/2 * 1/6 = 1/12. Chance Ratcliff
Dr Dembski- for what its worth, in your example:
To see how this works, let's consider a toy problem. Imagine that your search space consists of only six items, labeled 1 through 6. Let's say your target is item 6 and that you're going to search this space by rolling a fair die once. If it lands on 6, your search is successful; otherwise, it's unsuccessful. So your probability of success is 1/6. Now let's say you want to increase the probability of success to 1/2. You therefore find a machine that flips a fair coin and delivers item 6 to you if it lands heads and delivers some other item in the search space if it land tails. What a great machine, you think. It significantly boosts the probability of obtaining item 6 (from 1/6 to 1/2). But then a troubling question crosses your mind: Where did this machine that raises your probability of success come from? A machine that tosses a fair coin and that delivers item 6 if the coin lands heads and some other item in the search space if it lands tails is easily reconfigured. It can just as easily deliver item 5 if it lands heads and some other item if it lands tails. Likewise for all the remaining items in the search space: a machine such as the one described can privilege any one of the six items in the search space, delivering it with probability 1/2 at the expense of the others. So how did you get the machine that privileges item 6? Well, you had to search among all those machines that flip coins and with probability 1/2 deliver a given item, selecting the one that delivers item 6 when it lands heads. And what's the probability of finding such a machine? To keep things simple, let's imagine that our machine delivers item 6 with probability 1/2 and each of items 1 through 5 with equal probability, that is, with probability 1/10. Accordingly, this machine is one of six possible machines configured in essentially the same way. There's another machine that flips a coin, delivers item 1 from the original search space if it lands heads, and delivers any one of 2 through 6 with probability 1/10 each if the coin lands tails. And so on. Thus, of these six machines, one delivers item 6 with probability 1/2 and the remaining five machines deliver item 6 with probability 1/10. Since there are six machines, only one of which delivers item 6 (our target) with high probability, and since only labels and no intrinsic property distinguishes one machine from any other in this setup (the machines are, as mathematicians would say, isomorphic), the principle of indifference applies to these machines and prescribes that the probability of getting the machine that delivers item 6 with probability 1/2 is the same as that of getting any other machine, and is therefore 1/6. But a probability of 1/6 to find a machine that delivers item 6 with probability 1/2 is no better than our original probability of 1/6 of finding the target simply by tossing a die. In fact, once we have this machine, we still have only a 50-50 chance of locating item 6. Finding this machine incurs a probability cost of 1/6, and once this cost is incurred we still have a probability cost of 1/2 of finding item 6. Since probability costs increase as probabilities decrease, we're actually worse off than we were at the start, where we simply had to roll a die that, with probability 1/6, locates item 6. The probability of finding item 6 using this machine, once we factor in the probabilistic cost of securing the machine, therefore ends up being 1/6 x 1/2 = 1/12. 
So our attempt to increase the probability of finding item 6 by locating a more effective search for that item has actually backfired, making it in the end even more improbable that we'll find item 6. Conservation of information says that this is always a danger when we try to increase the probability of success of a search -- that the search, instead of becoming easier, remains as difficult as before or may even, as in this example, become more difficult once additional underlying information costs, associated with improving the search and often hidden, as in this case by finding a suitable machine, are factored in.
You should have had the other 5 machines give a zero probability of hitting a 6- no output at all. That way there is definitely a suitable machine that you are searching for, and ignoring the other 5 is valid in a real-world scenario. Otherwise, the way it looks as it stands, there isn't any difference, probability-wise, in making your pick of a machine and then getting a 6. Yes, it is far more difficult to search for the machines than it is to roll a die, and you have to factor in the probabilities of finding one (or six), but once you have the machines you specified, the probabilities of success are (about) the same regardless of the machine you choose. The search is more difficult, physically, to be sure. But that appears to be only part of your point. Joe
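For anyone who would rather run the quoted toy example than argue about it, here is a short Monte Carlo sketch in Python (an editorial illustration, with the setup exactly as quoted: six machines, one delivering item 6 with probability 1/2 and the rest with probability 1/10). It estimates both the joint probability Dembski computes (right machine AND item 6) and the overall probability of item 6 whichever machine turns up.

import random

def one_trial(rng):
    machine = rng.randrange(6)               # pick one of the six machines at random
    p_six = 0.5 if machine == 5 else 0.1     # machine 5 plays the role of the "six" machine
    got_six = rng.random() < p_six
    return (machine == 5 and got_six), got_six

rng = random.Random(0)
trials = 200_000
joint = total = 0
for _ in range(trials):
    j, t = one_trial(rng)
    joint += j
    total += t

print(joint / trials)   # ~0.083, i.e. about 1/12: right machine AND item 6
print(total / trials)   # ~0.167, i.e. about 1/6: item 6 by whichever machine was found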
R0bb? Is that it then? Joe
I don’t know what you mean by “part of the equation”, but those 3 are still part of ?.
Ω changes.
If you read the example, |Ω| is always 16.
A different 16. A 16 that does NOT include those 3:
• A is the uniform distribution over the rightmost four squares in the search space. • B is the uniform distribution over the bottom twelve squares in the search space.
The three in the top row starting in the left corner are no longer part of the search grid. Now that that has been exposed let's move on to the question of: "is it easier to roll a die to get your 6, a 1/6 probability than it is to also have to search for a machine that will allow you to find your 6 by say flipping a coin? And if there are 6 machines out there that will help you find your 6, but only one is the 50/50, is that less or more difficult than just rolling a die?"
You’re the one who claimed that an intelligent being could, without any prior information, create a metal detector.
Nope. My point was that someone without any prior knowledge of where the needle is located can use a metal detector to enhance his/ her chances of having a successful search- ie making it a much better chance of success than a blind search. Joe
@Chance Ratcliff 1) Do you agree that the probability to find the target "6" using the two-layered system of at first choosing a machine at random and then let the machine choose the target is 1/6 ? 2) W. Dembski says: "So our attempt to increase the probability of finding item 6 by locating a more effective search for that item has actually backfired, making it in the end even more improbable that we'll find item 6." How do you square this with your answer to 1) ? DiEb
onlooker, I know, the irony. ;-) "Either Dembski is talking about the probability of finding the target after the right machine is found, a true conditional probability, in which case the answer is .5 or he is talking about the “final cost”, which is the total probability of the two step process of choosing a machine and then asking it for a value, in which case the answer is 1/6." Or he is talking about the probability of both finding the correct machine (1/6) and finding the correct item after finding the correct machine (1/2), which is actually what he said. You can disagree with the claim by insisting that the total probability applies, or you can disagree by insisting that the conditional probability must be taken solely. But what you cannot refute is that Dembski is specifically referring to the intersection of two events, for which the probability is 1/12. And he is correct. P(A∩B) = 1/12, unless your math comes out differently. I've quoted Dembski, and I've shown why the math matches his claim as stated. So the idea that he made some elementary math error is without merit. If you lot would just acknowledge this simple little fact, I'll stop reminding everyone that P(A∩B) really does equal 1/12, and that Dembski really was talking about the probability of P(A∩B). Maybe then it would be appropriate to discuss whether the probability of the intersection is less relevant than total probability. However, in either case, the LCI holds, so I'm not sure what the fuss is about, unless it was to show that Dembski made some elementary math error -- which he didn't. Chance Ratcliff
Chance, Discussing probability with someone named Chance amuses me. ;-)
Again, he’s claiming that the final cost of B after securing the correct machine is 1/12, and he’s right. It’s the probability of both B and A occurring.
You can't have it both ways. Either Dembski is talking about the probability of finding the target after the right machine is found, a true conditional probability, in which case the answer is .5 or he is talking about the "final cost", which is the total probability of the two step process of choosing a machine and then asking it for a value, in which case the answer is 1/6. The core error that you and Dembski are making is that there is not one single correct machine. All of the machines have a non-zero probability of returning the target. That cannot be just ignored. onlooker
Joe:
Those 3 are no longer part of the equation. That was the whole point, the probabilities just got lower because the search space was narrowed.
I don't know what you mean by "part of the equation", but those 3 are still part of Ω. If you read the example, |Ω| is always 16. And the probabilities get higher, not lower, when the searchable area narrows (assuming that the area still contains the target).
Can an intelligent being, without any prior information, exist? Would such a being even care or know about searching?
You're the one who claimed that an intelligent being could, without any prior information, create a metal detector. Have you changed your mind?
That doesn’t say we add the two conditional probabilities together to get the total probability.
It says that the total probability is the weighted sum of the conditional probabilities. That's what DiEb did.
There should only be one final state-> the target. Otherwise you keep shifting until then, right? And if you are still shifting then you aren’t in a final state, which would be stable.
No, the model consists of only one transition. If you look at the state diagram, you'll see that there are no loops. In Dembski's model, a search consists of a single event. R0bb
The law of total probability doesn't seem to apply, DiEb. Ya see there are TWO targets, not one. The first target is picking the machine that gives you a 1/2 chance of getting a 6. And the final target is getting the 6. The first target has a 1/6 chance and the second has a 1/2 chance. You multiply those together to get a 1/12 chance. One more time- the law of total probability from Wikipedia:
In probability theory, the law (or formula) of total probability is a fundamental rule relating marginal probabilities to conditional probabilities.
That doesn’t say we add the two conditional probabilities together to get the total probability. Joe
W. Dembski:
The probability of finding item 6 using this machine, once we factor in the probabilistic cost of securing the machine, therefore ends up being 1/6 x 1/2 = 1/12. So our attempt to increase the probability of finding item 6 by locating a more effective search for that item has actually backfired, making it in the end even more improbable that we'll find item 6.
Yet often, as in this example, we may actually do worse by trying to improve the probability of a successful search.
The part in bold font isn't true - it isn't more improbable to find item 6, it is exactly as improbable as before; we don't actually do worse. @Chance Ratcliff: IMO the final cost of B after securing the correct machine would be P(B|A) and not P(B∩A). "I found the target" - "Sure, but you didn't use the right machine, so we can't give you any points for the answer" - that's just not how it works.... DiEb
onlooker, I'm simply pointing out that Dembski's claim, as stated, is that P(B∩A) = 1/12, and he's correct. Event A: getting the correct machine. Event B: getting the correct item (#6).
P(A) = 1/6
P(B|A) = 1/2
P(B∩A) = P(B|A)*P(A) = 1/2 * 1/6 = 1/12
The total probability for event B is 1/6 (and will not go higher in the given example) but Dembski is not referring to the total probability. If you don't agree with his claim, fine -- but the math, as it corresponds to the claim, is correct. Again, he's claiming that the final cost of B after securing the correct machine is 1/12, and he's right. It's the probability of both B and A occurring. Chance Ratcliff
onlooker:
That means that you can’t simply use 1/6 as the probability of getting the right machine because the other machines each also have a probability of .1 to return the target.
How many machines are there? 6. And how many do we get to choose? 1. So the probability is 1/6. Every machine has a 1/6 chance of being chosen. Joe
Given the two-tiered search- level 1 for the machine and level 2 for the 6- each of the six pathways (pathway being via one of the six machines) to 6 starts with a probability of 1/12. And that is half as good as the original. If you get the correct machine at level one then your odds jump to 1/2, which is 3 times better than the original. And if you get one of the other machines your odds drop to 1/10, which is 2/3 worse than the original. And the odds of getting a path that is worse than the original are 5/6. So you got to ask yourself, do you feel lucky, punk? :) Joe
Chance,
Dembski is referring to the total cost of finding item 6 after having secured the correct machine.
"The probability of finding item 6 using this machine, once we factor in the probabilistic cost of securing the machine, therefore ends up being 1/6 x 1/2 = 1/12."
Using cost as a metaphor (or worse, reifying it) is causing confusion. You are interpreting Dembski as changing the problem from finding a target using six available machines to finding a machine that finds the target with probability .5 and ignoring all the others. If you want to do that, you still have to take the probability that the other machines will find the target into account when computing the "probabilistic cost". That means that you can't simply use 1/6 as the probability of getting the right machine because the other machines each also have a probability of .1 to return the target. If "probabilistic cost" has any real meaning, based on my brute force approach above I predict you'll find that the actual value you get when you take those other probabilities into account will be 1/3 rather than 1/6. You'll have to define the concept more clearly first, though. onlooker
As mentioned by Joe, LCI holds even for P(B). We can add searches for searches for searches, and the probability does not increase beyond the original discrete probability of 1/6. But if we're paying the cost of having found those searches, the probability drops significantly. Chance Ratcliff
Dembski is referring to the total cost of finding item 6 after having secured the correct machine.
“The probability of finding item 6 using this machine, once we factor in the probabilistic cost of securing the machine, therefore ends up being 1/6 x 1/2 = 1/12.”
Taking into account what he's actually saying:
P(B|A) = 1/2
P(A) = 1/6
P(B∩A) = P(B|A)*P(A) = 1/2 * 1/6 = 1/12
He is not referencing the total probability of event B, as in P(B) = 1/6. If we wish to secure a machine that increases our chances of finding item 6 to 1/2, we must first pay 1/6 to do so. The cost of finding the desired item then is 1/12. He's referencing the probability of events A and B occurring. Dembski's math supports his claim as stated. Chance Ratcliff
onlooker, Chance Ratcliff explained it above. Also, what Dembski used are called conditional probabilities, and it works-> You multiply the odds of each level to get the final odds. Also 1/6 = 1/6, so even if you and DiEb are right, Dembski is still correct, as we didn't find a more effective search. You do understand that a more effective search would have a higher probability of success than the original... Joe
R0bb writes
Joe:
Please explain why your “+” should not be a “*”.
See Law of Total Probability.
Dembski's error is simple enough to demonstrate by brute force. One of his machines produces a particular value with probability .5 and each of the other values with probability .1. This is the same as picking one value at random from a set of integers like this:
11111 23456
There are six of these machines, so the total number of sets of integers is:
11111 23456
22222 13456
33333 12456
44444 12356
55555 12346
66666 12345
Now, there are two simple ways to see the problem. The first is that when Dembski says "The probability of finding item 6 using this machine, once we factor in the probabilistic cost of securing the machine, therefore ends up being 1/6 x 1/2 = 1/12" it is clear that he is ignoring half the 6s, namely those listed on the right. An even easier way is to recognize that picking a machine at random and getting a value at random from that machine is equivalent to simply picking one integer at random from all the integers listed. There are 60 integers, 10 each of 1, 2, 3, 4, 5, and 6. The probability of finding the target value is therefore 1/6, as calculated by DiEb. Dembski made an error. Hey, it happens to the best of us. Unfortunately for his thesis, that error means that his conclusion that "So our attempt to increase the probability of finding item 6 by locating a more effective search for that item has actually backfired, making it in the end even more improbable that we'll find item 6." is incorrect. The impact of this on his broader claims about "conservation of information" may or may not be significant. onlooker
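onlooker's brute-force tabulation is easy to reproduce. A few lines of Python (an editorial illustration) build the six ten-ticket pools listed above and count how many of the sixty tickets are sixes.

# Each machine is ten equally likely tickets: five copies of its favored value
# plus one each of the other five values, as in the listing above.
machines = []
for favored in range(1, 7):
    others = [v for v in range(1, 7) if v != favored]
    machines.append([favored] * 5 + others)

tickets = [t for pool in machines for t in pool]
print(len(tickets), tickets.count(6))     # 60 tickets, 10 of them sixes
print(tickets.count(6) / len(tickets))    # 0.1666... = 1/6, onlooker's figure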
Indeed, R0bb, you made the point before. And W. Dembski could argue that - assuming that an additional step is costly - the combination of the meta-search and the search is more costly than performing the search alone, without gaining a better expectation of success. But he says "Yet often, as in this example, we may actually do worse by trying to improve the probability of a successful search." - and that's just not true: the probability of a successful search hasn't changed. DiEb
Ω Ω Thanks Chance, I can see the symbols in the preview screen. Joe
In #53 you said that my examples were mathematically valid.
It is mathematically valid to say that 2 unicorns plus 2 unicorns = 4 unicorns. However in the real world there aren't any unicorns. Joe
And I never said that there are 5 possible final states for a given initial state. I said there are 5 final states.
There should only be one final state-> the target. Otherwise you keep shifting until then, right? And if you are still shifting then you aren't in a final state, which would be stable. Joe
In probability theory, the law (or formula) of total probability is a fundamental rule relating marginal probabilities to conditional probabilities.
That doesn't say we add the two conditional probabilities together to get the total probability. Joe
R0bb, Can an intelligent being, without any prior information, exist? Would such a being even care or know about searching? Joe
In that case, Bernoulli’s PrOIR would have the search for T characterized by a probability measure Θ2 (∈ M2(Ω)) that assigns probability 1/2 to both A and B.
The search confers a probability of 1/2 on subset A and a probability of 1/2 on subset B, which means it confers a probability of zero on the 3 squares that are in neither A nor B.
Those 3 are no longer part of the equation. That was the whole point, the probabilities just got lower because the search space was narrowed. Joe
Also Joe, I forgot to ask, how does a person invent a metal detector without any prior information, which would mean no information about the facts of science and engineering? R0bb
Joe:
Please explain why your “+” should not be a “*”.
See Law of Total Probability. R0bb
Dieben:
I see a tiny problem at Dr. Dembski’s toy example
Thanks, Dieben. I noticed that too, and brought it up in comment #20. It seems that when Dembski says that the probability of finding a target decreases when we factor in the cost of the higher-level search, he really means that the probability of finding the higher-level target AND finding the lower-level target is smaller than the probability of directly finding the lower-level target. R0bb
BTW Joe, if you don't like my example of an LCI violation, you can look at Dembski's own such example. On rereading the example in section 1.1.4 of the S4S paper, I see an LCI violation that I hadn't noticed before. Search B has 2 bits of active information, and the endogenous information of finding search B is 1 bit. So instead of arguing about the validity of my example, let's just use Dembski's. And BTW, how does Dembski's example apply to the real world? How do his three CoI theorems apply to the real world? (He claims that his Bora Bora example is a special case of his function-theoretic CoI theorem, but this is in fact an error.) If any of Dembski's examples or theorems don't apply to the real world, is he being uncivil by bringing them up? R0bb
Joe:
There are 5 POSSIBLE final states if and only if the ONE item can be in all three of the initial starting points at the same time.
And I never said that there are 5 possible final states for a given initial state. I said there are 5 final states. If you inferred from that statement that all 5 states are accessible to a given initial state, I'm sorry -- that wasn't my intention.
Please quote the part that says that. I cannot find anything that comes close to saying that.
Here is the quote:
In that case, Bernoulli’s PrOIR would have the search for T characterized by a probability measure Θ2 (∈ M2(Ω)) that assigns probability 1/2 to both A and B.
The search confers a probability of 1/2 on subset A and a probability of 1/2 on subset B, which means it confers a probability of zero on the 3 squares that are in neither A nor B.
Can something be mathematically valid when applied to something that is invalid, such as your mangled sets?
In #53 you said that my examples were mathematically valid. Have you changed your mind? And you still haven't answered the question: If the LCI fails in mathematically valid cases, is it a true law? R0bb
DiEB:
The probability for a success is 1/6 * 1/2 + 5/6 * 1/10 = 1/6. So the problem didn’t become more difficult.
But you only get one choice. Either you pick the 1/2 machine OR you pick a 1/10th machine. So it wouldn't be changing the "+" to a "*", you just delete the "+" and the rest of the stuff on the right of it. What that says is that either pick will give you the same odds at achieving your goal. Which means if you had two picks you would double your chances, as DiEB said. Joe
RE: #56 "I see a tiny problem at Dr. Dembski’s toy example: Could you please correct your miscalculation, Dr. Dembski?" The total probability of securing item 6 is indeed 1/6, but Dembski's qualification is clear:
"The probability of finding item 6 using this machine, once we factor in the probabilistic cost of securing the machine, therefore ends up being 1/6 x 1/2 = 1/12."
This is correct. We wanted to specifically secure item 6, so we incurred the probabilistic cost of finding the correct machine that would increase our chances to 1/2. This gives us a 1/12 probability of securing item 6. Let A be the event of securing the better machine and B the event of getting item 6: P(A) = 1/6 and P(B|A) = 1/2. So given that event A occurred, at a cost of 1/6, event B costs an additional 1/2. If we pay 1/6 for the required machine, our chances of getting item 6 by that route are 1/12. Dembski's not referring to the total probability of securing item 6; he's made a clear qualification that our chances have been reduced once we have already paid the cost of securing the desired machine. Chance Ratcliff
I wrote an email to Dr. Dembski, using the address listed at evoinfo.org/people . Unfortunately wdembski [at] swbts [dot] edu doesn't work any longer. DiEb
Joe:
Please explain why your “+” should not be a “*”.
It's called the Law of total probability. Consider the events:
T: the target 6 is identified
S: the better machine is chosen
Then P(S) = 1/6, P(T|S) = 1/2, P(T|not(S)) = 1/10 and by said law we get:
P(T) = P(T|S)*P(S) + P(T|not(S))*P(not(S)) = 1/2 * 1/6 + 1/10 * 5/6 = 1/6
DiEb
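A quick check of DiEb's arithmetic, as a minimal Python sketch using exact fractions (nothing is assumed beyond the probabilities listed above):

from fractions import Fraction

P_S = Fraction(1, 6)              # the better machine is chosen
P_T_given_S = Fraction(1, 2)      # target 6 found, given the better machine
P_T_given_notS = Fraction(1, 10)  # target 6 found, given one of the other machines

# Law of total probability: P(T) = P(T|S)P(S) + P(T|not S)P(not S)
P_T = P_T_given_S * P_S + P_T_given_notS * (1 - P_S)
print(P_T)  # 1/6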
R0bb: First, I suggest you take a look at the second law of thermodynamics, from the statistical end. You will see that it is about fluctuations, and it is about populations and what happens to fluctuations as pop goes up. Second, there are statistical laws that are perfectly valid being stated as expected outcomes. Then, bring to bear the fluctuations issue as pop size goes up enough and you see where something that is strictly mathematically/ logically/ physically possible becomes observationally so utterly implausible as to be reliably not the case. If you will look at comment 70 of the 18 Q's thread, here, you will see a clip from WmAD's recent "made simple" article:
Mathematically speaking, search always occurs against a backdrop of possibilities (the search space), with the search being for a subset within this backdrop of possibilities (known as the target). Success and failure of search are then characterized in terms of a probability distribution over this backdrop of possibilities, the probability of success increasing to the degree that the probability of locating the target increases . . . . Take an Easter egg hunt in which there’s just one egg carefully hidden somewhere in a vast area. This is the target and blind search is highly unlikely to find it precisely because the search space is so vast. But there’s still a positive probability of finding the egg even with blind search, and if the egg is discovered, then that’s just how it is. It may be, because the egg’s discovery is so improbable, that we might question whether the search was truly blind and therefore reject this (null) hypothesis. Maybe it was a guided search in which someone, with knowledge of the egg’s whereabouts, told the seeker “warm, warmer, no colder, warmer, warmer, hot, hotter, you’re burning up.” Such guidance gives the seeker added information that, if the information is accurate, will help locate the egg with much higher probability than mere blind search — this added information changes the probability distribution . . . . The Easter egg hunt example provides a little preview of conservation of information. Blind search, if the search space is too large and the number of Easter eggs is too small, is highly unlikely to successfully locate the eggs. A guided search, in which the seeker is given feedback about his search by being told when he’s closer or farther from the egg, by contrast, promises to dramatically raise the probability of success of the search. The seeker is being given vital information bearing on the success of the search. But where did this information that gauges proximity of seeker to egg come from? Conservation of information claims that this information [the guide for the search] is itself as difficult to find as locating the egg by blind search, implying that the guided search is no better at finding the eggs than blind search once this information must be accounted for . . .
The language above is clearly about expectations under reasonable circumstances, and about empirical reliability of a principle. It is explicit that it is logically and physically possible for blind search to succeed. But on the scale of the space to be searched and the relative isolation of the hot zone, there is a maximal implausibility of blind search succeeding to the point where an alleged blind search that is successful is suspect for cheating. The matter then moves to the issue of guiding the search that enhances the probability of success, and shows how if the search is to be found blindly -- i.e. without intelligence -- then it is subject to a search itself that is comparably difficult to the direct search, or worse. It seems to me that the example given, albeit a toy, aptly shows that. So, WmAD's remarks seem to me unexceptional in what they are affirming and the cautions they point to. I think your critiques need to be rebalanced in that light. There are such things as expected outcomes that are so weighted by the balance of probabilities given the scope of a relevant space that there is no good reason to expect to observe a truly improbable outcome on the relevant scope of resources, lab, planet, solar system or observed cosmos. And when something is cast in such fundamentally thermodynamic terms, to point out what is mathematically possible or what happens with toy examples to the contrary of the overwhelming expectation, is distractively irrelevant to the point where it can easily become a red herring, strawman fallacy. KF PS: I suggest you look here on in my always linked note, on the relevant thermodynamics perspective. kairosfocus
DiEB:
Can you spot the error in his calculation? The probability to find the correct machine and then the target is indeed 1/12, but the probability to find the target via choosing a machine at random at first is 1/6, thanks to the symmetry of the problem: The probability for a success is 1/6 * 1/2 + 5/6 * 1/10 = 1/6. So the problem didn't become more difficult.
Please explain why your "+" should not be a "*". Joe
If the LCI fails in mathematically valid cases, is it a true law?
Can something be mathematically valid when applied to something that is invalid, such as your mangled sets? If so then how does it apply to the real world? And if it doesn't apply then why even bring it up unless you are not interested in a civil discussion? Joe
R0bb:
I did: Section 4.1.1 of the Search for a Search paper. |Ω| is 16, 3 of which have zero probability.
Please quote the part that says that. I cannot find anything that comes close to saying that. But obviously we see things differently so I need your help. Joe
There are 5 final states.
There are 5 POSSIBLE final states if and only if the ONE item can be in all three of the initial starting points at the same time. So yes, we definitely have a communication problem. Gotta go.... Joe
Joe, we're having some serious communication problems. I've asked several questions throughout our discussion in an effort to find the points of communication breakdown. I realize that it would take some work to answer all of those questions, but I think that answering them is necessary in order for us to understand each other. So I have two more very important questions: 1. Are you interested in doing the work it takes for us to understand each other? 2. Are you interested in having a civil discussion, free of taunts? R0bb
Joe:
I never said there are 5 choices.
You are by saying there are 5 possible outcomes. That is wrong. YOU said:
Finally, p2 is 1/5 since the target consists of only one of the final five states.
There are 5 final states. For any given initial state, only 2 of the 5 final states are accessible.
Omega = 2 as that is the number of possibilities.
By "possibilities" I assume you mean outcomes with non-zero probability. But why must Ω contain only outcomes with non-zero probability? It's common in probability theory for samples spaces to contain zero probability outcomes.
For the LCI to qualify as a law, it has to hold up to any mathematically valid case that we throw at it.
LoL! No R0bb, something can be mathematically valid and be totally senseless to the real world.
I can't tell whether or not you agree with the statement to which you're responding, so I'll ask again the question I asked in #45: If the LCI fails in mathematically valid cases, is it a true law?
16 squares there R0bb. How did you arrive at 13?
Only 13 squares have non-zero probability in the alternate search.
Yes R0bb, you can mangle what Dembski said. Are you proud of that?
Where exactly did I mangle what Dembski said?
Mathematically valid but not properly stated. Obviously you are having problems following along.
I don't know what your criterion is for deeming something "properly stated", or exactly how I failed to meet it, so I'll settle for it being mathematically valid.
Can you provide ONE example in which Dembski/ Marks include zero-probability outcomes in omega? If not then you don’t have a point other than demonstrating dishonesty.
I did: Section 4.1.1 of the Search for a Search paper. |Ω| is 16, 3 of which have zero probability. Consider some of their other examples of active information, like Marks' example of finding a good recipe for boiling an egg. There are 66 possibilities, 22 of which are zero probability in the alternate search. If we don't allow zero-probability outcomes in Ω, then the active information is zero. Do you think Marks would agree that the active info is zero? Also consider Dembski's oft-used example of a treasure map. The map eliminates all outcomes except for one. Does the map have zero active information? R0bb
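For what it's worth, here is a small Python sketch of the point at issue. It assumes a reading of the 4.1.1 example rather than quoting it: a 16-outcome Ω, a one-square target, and an alternate search uniform over the 13 squares with non-zero probability; the only thing varied is whether the 3 zero-probability squares stay in Ω.

import math

def active_info(p_target_alt, card_omega, card_target=1):
    # Active information: I+ = log2( P(T | alternate search) / (|T| / |Omega|) )
    return math.log2(p_target_alt / (card_target / card_omega))

# Zero-probability squares kept in Omega (|Omega| = 16, alternate search uniform over 13 squares):
print(active_info(1/13, 16))  # ~0.30 bits

# Zero-probability squares dropped from Omega, so |Omega| = 13:
print(active_info(1/13, 13))  # 0.0 bits -- the active information disappears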
I see a tiny problem at Dr. Dembski's toy example: Could you please correct your miscalculation, Dr. Dembski? DiEb
F/N: Since some objectors may be tempted to be dismissive on what I mean by needle in haystack searches on steroids, cf. the estimate here at IOSE. (Cf also the remarks on islands of function here.) The linked shows in outline that the ratio of the number of possible search operations of our solar system to the number of possibilities for just 500 bits is as a straw-sized sample to a haystack 1,000 light years on a side, using the time-tick of the fastest chemical interactions, and the typical estimate for the solar system's age and number of atoms. Such a blind search runs into the implications of sampling theory, that a reasonably sized but relatively small sample will by overwhelming probability pick up the BULK of the distribution, not special, specifically and independently describable isolated zones. This, for the same reasons in effect as were already outlined for looking at frames of sampling. To see why, imagine the exercise of making up a large bell shaped curve from bristol board or the like, dividing it into even stripes, carrying the tails out to say +/- 6 SD's. Now, go high enough that dropping darts would be essentially evenly distributed and drop a dart repeatedly. After about 30 hits, we would begin to see a picture of the bulk of the distribution, and after about 100 - 1,000 it would be pretty good. But the far tails will very seldom come up as the hits in the board will tend overwhelmingly to go where there is a lot of space to get hit. That, in a nutshell, is the whole issue of how hard it is to get to CSI by chance based contingency. And oh yes there is a debate as to whether that dropped dart "actually" pursues a deterministic trajectory on initial and intervening circumstances. So, how could the result be a chance based random pretty flat distribution yielding a result proportionate to relative area? Let's go back to how my Dad and colleagues in Statistics 50 years ago would use a phone book as a poor man's random number table. The assignment of phone numbers is absolutely deterministic, based on the technology used. Surnames and given names are not random either. But, within a given local switching office [the first three digits of a 7-digit number] there is no credible correlation between names and local loop numbers on the whole [the last four digits]. So, by going to a page at random and picking the first number there to guide to another number that then sets the number of pages forward and back respectively to pick a number, there will be a succession of effectively random 4-digit numbers. Similarly, the digits of pi in succession are absolutely deterministic, but since there is no correlation between pi and the decimal numbers, the successive digits are essentially randomly distributed. So, clashing, uncorrelated deterministic streams of events can easily give rise to random distributions. Dart dropping has the same effect and gives a good enough result. This, BTW, is also why there will be some mutations that may well be random in effect, though the evidence that mutations may be functionally incorporated into the system should be reckoned with. Or, do you think this is just for the immune system? Indeed, to promote robust adaptability it would make sense to build in a mechanism to do adaptations by chance incremental variation and niche exploitation. But that adaptation is not to be confused with how to get to the underlying body plan in the first place. That puts us into search space challenge territory, easily. KF kairosfocus
No, it means that if we always exclude zero-probability outcomes from Ω, then in some cases active information will decrease when a search improves.
Can you provide ONE example in which Dembski/ Marks include zero-probability outcomes in omega? If not then you don't have a point other than demonstrating dishonesty. Joe
As I said in the first sentence you quoted, I’m simply reiterating a point that Dembski used to make often.
Yes R0bb, you can mangle what Dembski said. Are you proud of that?
Are my “senseless” examples mathematically valid or not?
Mathematically valid but not properly stated. Obviously you are having problems following along. Joe
But if you think that endogenous probability = |T|/(number of choices), then reading through their examples will correct that misconception. For example, in section 4.1.1 of the Search for a Search paper, the endogenous probability is 1/16, but the number of choices is 13. How do you reconcile that with your objection to my example?
16 squares there R0bb. How did you arrive at 13? Joe
R0bb:
I never said there are 5 choices.
You are by saying there are 5 possible outcomes. That is wrong. YOU said:
Finally, p2 is 1/5 since the target consists of only one of the final five states.
There are only 5 final states if all three of the initial states are taken by items that move.
As I pointed out in #31, endogenous probability is defined as |T|/|Ω|, not |T|/(number of choices).
Omega = 2 as that is the number of possibilities.
For the LCI to qualify as a law, it has to hold up to any mathematically valid case that we throw at it.
LoL! No R0bb, something can be mathematically valid and be totally senseless to the real world. Joe
corrected link: Rascal Flatts – “Bless The Broken Road” – Official Music Video http://www.youtube.com/watch?v=8-vZlrBYLSU bornagain77
Moreover, from our greater understanding of the nature of physical reality, the argument for God from consciousness can now be framed like this:
1. Consciousness either preceded all of material reality or is an 'epi-phenomenon' of material reality.
2. If consciousness is an 'epi-phenomenon' of material reality then consciousness will be found to have no special position within material reality. Whereas conversely, if consciousness precedes material reality then consciousness will be found to have a special position within material reality.
3. Consciousness is found to have a special, even central, position within material reality.
4. Therefore, consciousness is found to precede material reality.

Three intersecting lines of experimental evidence from quantum mechanics that show that consciousness precedes material reality
https://docs.google.com/document/d/1G_Fi50ljF5w_XyJHfmSIZsOcPFhgoAZ3PRc_ktY8cFo/edit
i.e. Materialism had postulated for centuries that everything reduced to, or emerged from material atoms, yet the correct structure of reality is now found by modern science to be as follows:
1. Material particles (mass) normally reduce to energy (e=mc^2).
2. Energy and mass both reduce to information (quantum teleportation).
3. Information reduces to consciousness (the geometric centrality of conscious observation in the universe dictates that consciousness must precede quantum wave collapse to its single-bit state).
Of related interest, in the following video, at the 37:00 minute mark, Anton Zeilinger, a leading researcher in quantum teleportation with many breakthroughs under his belt, humorously reflects on just how deeply determinism has been undermined by quantum mechanics by saying that such a deep lack of determinism may provide some of us a loophole when we meet God on judgment day.
Prof Anton Zeilinger speaks on quantum physics. at UCT - video http://www.youtube.com/watch?v=s3ZPWW5NOrw
Personally, I feel that such a deep undermining of determinism by quantum mechanics, far from providing a 'loop hole' on judgement day, actually restores free will to its rightful place in the grand scheme of things, thus making God's final judgments on men's souls all the more fully binding since man truly is a 'free moral agent' as Theism has always maintained. And to solidify this theistic claim for how reality is constructed, the following study came along a few months after I had seen Dr. Zeilinger’s video:
Can quantum theory be improved? - July 23, 2012 Excerpt: Being correct 50% of the time when calling heads or tails on a coin toss won’t impress anyone. So when quantum theory predicts that an entangled particle will reach one of two detectors with just a 50% probability, many physicists have naturally sought better predictions. The predictive power of quantum theory is, in this case, equal to a random guess. Building on nearly a century of investigative work on this topic, a team of physicists has recently performed an experiment whose results show that, despite its imperfections, quantum theory still seems to be the optimal way to predict measurement outcomes., However, in the new paper, the physicists have experimentally demonstrated that there cannot exist any alternative theory that increases the predictive probability of quantum theory by more than 0.165, with the only assumption being that measurement (*conscious observation) parameters can be chosen independently (free choice, free will, assumption) of the other parameters of the theory.,,, ,, the experimental results provide the tightest constraints yet on alternatives to quantum theory. The findings imply that quantum theory is close to optimal in terms of its predictive power, even when the predictions are completely random. http://phys.org/news/2012-07-quantum-theory.html
So just as I had suspected after watching Dr. Zeilinger’s video, it is found that a required assumption of ‘free will’ in quantum mechanics is what necessarily drives the completely random (non-deterministic) aspect of quantum mechanics. Moreover it was shown in the paper that one cannot ever improve the predictive power of quantum mechanics by ever removing free will, or conscious observation, as a starting assumption in Quantum Mechanics! of note: *The act of ‘conscious observation’ in quantum mechanics is equivalent to 'measurement',,
What drives materialists crazy is that consciousness cannot be seen, tasted, smelled, touched, heard, or studied in a laboratory. But how could it be otherwise? Consciousness is the very thing that is DOING the seeing, the tasting, the smelling, etc… We define material objects by their effect upon our senses – how they feel in our hands, how they appear to our eyes. But we know consciousness simply by BEING it! - APM - UD Blogger
Of somewhat related interest, it is interesting to point out where I picked up the notion of 'empirically deprived mathematical fantasy' from: This following quote, in critique of Hawking's book 'The Grand Design', is from Roger Penrose who worked closely with Stephen Hawking in the 1970's and 80's:
'What is referred to as M-theory isn’t even a theory. It’s a collection of ideas, hopes, aspirations. It’s not even a theory and I think the book is a bit misleading in that respect. It gives you the impression that here is this new theory which is going to explain everything. It is nothing of the sort. It is not even a theory and certainly has no observational (evidence),,, I think the book suffers rather more strongly than many (other books). It’s not a uncommon thing in popular descriptions of science to latch onto some idea, particularly things to do with string theory, which have absolutely no support from observations.,,, They are very far from any kind of observational (testability). Yes, they (the ideas of M-theory) are hardly science." – Roger Penrose – former close colleague of Stephen Hawking – in critique of Hawking’s new book ‘The Grand Design’ the exact quote in the following video clip: Roger Penrose Debunks Stephen Hawking's New Book 'The Grand Design' - video http://www.metacafe.com/watch/5278793/
Also of related interest, here is a constraining factor that argues very strongly against the Darwinian notion of gradualism:
Poly-Functional Complexity equals Poly-Constrained Complexity Excerpt: Scientists Map All Mammalian Gene Interactions – August 2010 Excerpt: Mammals, including humans, have roughly 20,000 different genes.,,, They found a network of more than 7 million interactions encompassing essentially every one of the genes in the mammalian genome. http://www.sciencedaily.com/releases/2010/08/100809142044.htm https://docs.google.com/document/d/1xkW4C7uOE8s98tNx2mzMKmALeV8-348FZNnZmSWY5H8/edit
Music and verse:
Rascal Flatts - "Bless The Broken Road" - Official Music Video http://www.youtube.com/watch?v=kkWGwY5nq7A Romans 13:11 And do this, understanding the present time. The hour has come for you to wake up from your slumber, because our salvation is nearer now than when we first believed.
bornagain77
Robb you ask:
Are my “senseless” examples mathematically valid or not?
Exactly the right question to ask! In order to establish the validity of your mathematics, i.e. that it is in the realm of reality and not in the realm of 'empirically deprived mathematical fantasy', I, once again, request that you present real world empirical evidence to show that functional information can be generated by material processes. Then you can, as far as empirical science is concerned, kill two birds with one stone: 1. you can falsify Dembski, Marks, and company's LCI, and 2. you can falsify Abel and Trevors' null hypothesis for functional information generation:
Three subsets of sequence complexity and their relevance to biopolymeric information - Abel, Trevors Excerpt: Three qualitative kinds of sequence complexity exist: random (RSC), ordered (OSC), and functional (FSC).,,, Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC of OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC).,,, Testable hypotheses about FSC What testable empirical hypotheses can we make about FSC that might allow us to identify when FSC exists? In any of the following null hypotheses [137], demonstrating a single exception would allow falsification. We invite assistance in the falsification of any of the following null hypotheses: Null hypothesis #1 Stochastic ensembles of physical units cannot program algorithmic/cybernetic function. Null hypothesis #2 Dynamically-ordered sequences of individual physical units (physicality patterned by natural law causation) cannot program algorithmic/cybernetic function. Null hypothesis #3 Statistically weighted means (e.g., increased availability of certain units in the polymerization environment) giving rise to patterned (compressible) sequences of units cannot program algorithmic/cybernetic function. Null hypothesis #4 Computationally successful configurable switches cannot be set by chance, necessity, or any combination of the two, even over large periods of time. We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses. This renders each of these hypotheses scientifically testable. We offer the prediction that none of these four hypotheses will be falsified. http://www.tbiomed.com/content/2/1/29
It is interesting to note that Dembski and Marks's LCI is a bit more nuanced in its required empirical validation, and/or falsification, than Abel and Trevor's Null Hypothesis is,,,
"LIFE’S CONSERVATION LAW: Why Darwinian Evolution Cannot Create Biological Information": Excerpt: Though not denying Darwinian evolution or even limiting its role in the history of life, the Law of Conservation of Information shows that Darwinian evolution is inherently teleological. Moreover, it shows that this teleology can be measured in precise information-theoretic terms. http://evoinfo.org/publications/lifes-conservation-law/
,,, in that Dembski and Marks's LCI, since it does not falsify gradual Darwinian evolution straight out, requires us to ask whether physical reality is materialistic in its basis, as the atheist holds, or theistic in its basis, as the theist holds. It forces us to address empirically, positively or negatively, the primary question that has been at the heart of this debate since the ancient Greeks. And on that most crucial of questions, 'Is physical reality materialistic or theistic in its basis?', modern science, after all these centuries of heated debate between materialists and Theists, has finally shed light on what the answer is:
Quantum Evidence for a Theistic Universe https://docs.google.com/document/d/1agaJIWjPWHs5vtMx5SkpaMPbantoP471k0lNBUXg0Xo/edit Quantum Entanglement – The Failure Of Local Realism - Materialism - Alain Aspect on Einstein, Bohr, Bell - video http://www.metacafe.com/w/4744145
The falsification for local realism (materialism) was recently greatly strengthened:
Physicists close two loopholes while violating local realism - November 2010 Excerpt: The latest test in quantum mechanics provides even stronger support than before for the view that nature violates local realism and is thus in contradiction with a classical worldview. http://www.physorg.com/news/2010-11-physicists-loopholes-violating-local-realism.html
In fact, Quantum Mechanics has now been extended by Anton Zeilinger, and team, to falsify local realism (reductive materialism) without even using quantum entanglement to do it:
‘Quantum Magic’ Without Any ‘Spooky Action at a Distance’ – June 2011 Excerpt: A team of researchers led by Anton Zeilinger at the University of Vienna and the Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences used a system which does not allow for entanglement, and still found results which cannot be interpreted classically. http://www.sciencedaily.com/releases/2011/06/110624111942.htm
i.e. The materialist's cornerstone postulation, which had been that material particles (atoms) are self sustaining 'eternal' entities, is now shown to be false. i.e. A non-local, beyond space and time, cause must be appealed to in order to explain the continued existence of material particles within physical reality. Materialists simply have no rational solution to appeal to whereas Theists have always maintained that Almighty God, who is transcendent of space and time, is upholding/sustaining all of physical reality in its continued existence.
Revelation 4:11 NIV "You are worthy, our Lord and God, to receive glory and honor and power, for you created all things, and by your will they were created and have their being." "The 'First Mover' is necessary for change occurring at each moment." Michael Egnor - Aquinas’ First Way http://www.evolutionnews.org/2009/09/jerry_coyne_and_aquinas_first.html Not Understanding Nothing – A review of A Universe from Nothing – Edward Feser - June 2012 Excerpt: But Krauss simply can’t see the “difference between arguing in favor of an eternally existing creator versus an eternally existing universe without one.” The difference, as the reader of Aristotle or Aquinas knows, is that the universe changes while the unmoved mover does not, or, as the Neoplatonist can tell you, that the universe is made up of parts while its source is absolutely one; or, as Leibniz could tell you, that the universe is contingent and God absolutely necessary. There is thus a principled reason for regarding God rather than the universe as the terminus of explanation. http://www.firstthings.com/article/2012/05/not-understanding-nothing
Although the preceding evidence from quantum mechanics should be more than enough for any reasonable person to see that the primary claim of materialism (self sustaining atoms) is now rendered completely false, the empirical evidence for a theistic universe, that modern science has finally revealed, certainly goes far deeper than the brief overview I presented:
Centrality of Each Individual Observer In The Universe and Christ’s Very Credible Reconciliation Of General Relativity and Quantum Mechanics Excerpt: I find it extremely interesting, and strange, that quantum mechanics tells us that instantaneous quantum wave collapse to its 'uncertain' 3-D state is centered on each individual observer in the universe, whereas, 4-D space-time cosmology (General Relativity) tells us each 3-D point in the universe is central to the expansion of the universe. These findings of modern science are pretty much exactly what we would expect to see if this universe were indeed created, and sustained, from a higher dimension by a omniscient, omnipotent, omnipresent, eternal Being who knows everything that is happening everywhere in the universe at the same time. These findings certainly seem to go to the very heart of the age old question asked of many parents by their children, “How can God hear everybody’s prayers at the same time?”,,, i.e. Why should the expansion of the universe, or the quantum wave collapse of the entire universe, even care that you or I, or anyone else, should exist? Only Theism offers a rational explanation as to why you or I, or anyone else, should have such undeserved significance in such a vast universe: https://docs.google.com/document/d/17SDgYPHPcrl1XX39EXhaQzk7M0zmANKdYIetpZ-WB5Y/edit?hl=en_US Psalm 33:13-15 The LORD looks from heaven; He sees all the sons of men. From the place of His dwelling He looks on all the inhabitants of the earth; He fashions their hearts individually; He considers all their works.
bornagain77
PS: Boltzmann actually simply used W. It is on his tombstone. kairosfocus
F/N 2: Earlier, I pointed out that when one searches in a space or samples it, one faces the issue of sampling frame, with potential for bias. In the search context, if one's sampling frame is a type-F, one may drastically improve the conditional probability of finding the target sub-set of space W, T, given sample frame F, on a search-sample of scope s. But also, if the frame is a type-G instead, then one has reduced the conditional probability of successful search given sample frame G, to zero, as T is not in G. I then raised the issue that searching for a sample frame is a major challenge. I should note a reasonable estimate of that challenge. W is the population, the set of possible configs here. The possible F's (obviously a frame is non-unique) and G's are obviously sub-sets of W. So, we are looking at the set of possible subsets of W, perhaps less the empty set {} in practical terms, as if one is in fact taking on a search, one will have a frame of some scope. But, for completeness that empty set would be in, and takes in the cases of no-sample. The power set of a given set of n members, of course, has 2^n members. In the case of a set of the possible configs for 500 bits, we are looking at the power set for 2^500 ~ 3.27*10^150. Then, raise 2 to that power: 2^(3.27*10^150). The scope of such a set overwhelmingly, way beyond merely astronomically, dwarfs the original set. To estimate it, observe that log x^n = n * log x. 3.27*10^150 times log 2 (base 10) ~ 9.85*10^149. That is the LOGARITHM of the number. Going to the actual number, we are talking here of essentially 1 followed by roughly 9.85*10^149 zeros, which we could not write out with all the atoms of our observed cosmos, not by a long, long, long shot. Take away 1 for eliminating the empty set, and that is what we are looking at. So, first and foremost, we should not allow toy examples that do not have anywhere near the relevant threshold scope of challenge on complexity to mislead us into thinking that the search for a successful search strategy -- remember that boils down to being a framing of the sampling process -- is an easy task. So, absent special information, the blind search for a good frame will be much harder than the direct blind search for the hot zone T in W. So also, if searching blindly by trial and error on W is utterly unlikely to succeed, searching blindly in the power set less 1: (2^W) - 1, will be vastly more unlikely to succeed. And, since -- by virtue of the applicable circumstances that sharply constrain configs to get them to function in relevant ways -- T is small and isolated in W, by far and away most of the search frames in that set will be type-G not type-F. Consequently, if a framing "magically" transforms the likelihood of search success, the reasonable best explanation for that is that it is because the search framing was intelligently selected on key information. And it is not unreasonable to define a quantity for the impact of that information, on the gap between blind search on W and search on F. Hence the concept and metrics for active information are not unreasonable on the whole, never mind whatever particular defects may be found with specific models and proposed metrics. One last point. In thermodynamics, it is notorious that for small, toy samples, large fluctuations are quite feasible. But, as the number of particles in a thermodynamic system rises to more realistic levels, the fact that the overwhelming bulk of the distribution of possibilities tends to cluster on a peak utterly dominates behaviour.
So, yes, for toy examples, we can easily enough find large fluctuations from the "average" -- more properly, expected -- outcome. But once we go up to realistic scale, spontaneous, stochastic behaviour will normally tightly cluster on the bulk of the distribution of possibilities. Or, put another way, not all lotteries are winnable, especially the naturally occurring ones. Those that are advertised all around are very carefully designed to be profitable and winnable as the announcement of a big winner will distract attention from the typical expectation: loss. So, to point to the abstract possibility of fluctuations, especially on toy examples, is distractively irrelevant and strawmannish relative to the real challenge: hitting a tiny target zone T in a huge config space W, usually well beyond 2^500 in scope. As we can easily see, on the scope of resources in our solar system, the possible sample size relative to the scope of possibilities is overwhelmingly unfavourable, leading to the problem of a chance based needle in a haystack blind search exercise on steroids. (Remember, mechanical necessity does not generate high contingency, it is chance or choice that does that.) The result of that challenge is obvious all around us: the successful creation of entities that are functional, complex and dependent on a specific config or a cluster of similar configs to function is best explained on design by skilled and knowledgeable intelligence, not blind chance and mechanical necessity. The empirical evidence and the associated needle in haystack or monkeys at keyboards challenges are so overwhelmingly in favour of that point that the real reason for the refusal to accept this as even "obvious" is prior commitment to and/or indoctrination in the ideology that blind chance and necessity moved us from molecules to Mozart. KF kairosfocus
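The power-set arithmetic above is easy to verify on a log scale; a small Python sketch (nothing assumed beyond |W| = 2^500):

import math

card_W = 2 ** 500                # ~3.27*10^150 configurations for 500 bits
print(f"{card_W:.3e}")           # 3.273e+150

# The power set of W has 2^|W| members; its base-10 logarithm is |W| * log10(2),
# since the number itself is far too large to write out.
log10_power_set = card_W * math.log10(2)
print(f"{log10_power_set:.3e}")  # ~9.854e+149, i.e. a 1 followed by roughly 9.85*10^149 zeros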
Again just because R0bb can muddle his target set doesn’t mean someone else can’t come along, take the muddled set and make simple sense out of it. For example instead of defining Ω as the muddled {1, higher than 1}, we would properly define it as {1,2,3,4,5,6}.
As I said in the first sentence you quoted, I'm simply reiterating a point that Dembski used to make often. If I'm wrong about it, then so was Dembski. A probability distribution is mathematically valid iff every probability is between 0 and 1, and the sum of the probabilities is 1. Can you provide definitions for your terms "muddled" and "properly defined"? A distribution is muddled iff __________________________. A distribution is properly defined iff __________________________. Are any of my sample spaces or distributions mathematically invalid?
So this second example goes to my point about R0bb’s first example- that yes, if you get to do whatever you want, no matter how senseless it is, you can seem to violate LCI.
Are my "senseless" examples mathematically valid or not? Do some of them violate the LCI or only "seem" to violate it? If the LCI fails in mathematically valid cases, is it a true law? R0bb
Joe:
Any one of the 6 outcomes can be had on any ONE roll of the dice. Not so with your first example.
Pardon my thick skull, but are you referring to my random walk example? If so, what aspect of the random walk are you describing as a roll of the die? Or are you talking about the die example in my second post at TSZ?
Consistently excluding zero-probability outcomes from Ω would yield bizarre results.
So with the dice example I quoted above does that mean we should also include numbers 7 – infinity?
No, it means that if we always exclude zero-probability outcomes from Ω, then in some cases active information will decrease when a search improves. Do you agree? As for including numbers 7 - infinity (by which I assume you mean all integers greater than 6, none of which is actually infinity), is there any reason not to do so, other than inconvenience?
And of course a coin toss would then have more than two outcomes- in zero G.
Same question as above.
Wow R0bb, thanks. That clears up my misunderstanding. If we just do whatever we want we can violate the LCI.
For the LCI to qualify as a law, it has to hold up to any mathematically valid case that we throw at it. "Doing whatever we want", so long as it's mathematically valid, is how we test purported laws. Are any of my examples mathematically invalid? If so, then can you please show me where? R0bb
Joe:
There aren’t 5 choices, ever. The item only has two choices, three if it can stay put. You can spew your rhetoric all you want it ain’t ever going to change that fact.
I never said there are 5 choices. I know there are 2. This is a two-dimensional random walk, where every transition goes one of two ways. I said so in the model description and showed it in the model diagram. Let's recap. Your disagreement, as stated in #24, is with my value of 1/5 for the log unscaled endogenous info (I'll call it endogenous probability). You say it should be 1/2 because there are only two choices. As I pointed out in #31, endogenous probability is defined as |T|/|Ω|, not |T|/(number of choices). Do you agree that this is how it's defined? You might be of the opinion that Ω is supposed to be defined such that |Ω| = number of choices, and therefore endogenous probability = |T|/|Ω| = |T|/(number of choices). Is that your position? If that is your position, do you believe that Dembski and Marks always define Ω such that |Ω| = number of choices? If that is not your position, why do you think that the endogenous probability is |T|/(number of choices)?
Perhaps you can tell us which one of Dembski & Marks’ examples your example 1 is copying. The point is I say you pulled your example from your _______ and it has nothing to do with what they are saying.
I'm not copying any of their examples. What would be the point of that? But if you think that endogenous probability = |T|/(number of choices), then reading through their examples will correct that misconception. For example, in section 4.1.1 of the Search for a Search paper, the endogenous probability is 1/16, but the number of choices is 13. How do you reconcile that with your objection to my example? Consider the concept of "Brillouin active information", defined in section III.B of this paper. If endogenous probability is always |T|/(number of choices), then Brillouin active information is always zero. Why would Dembski and Marks define a measure that's always zero? Bottom line: We both agree that there are 2 choices. You claim that the endogenous probability must therefore be 1/2. Why? R0bb
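To make the definitional point concrete, here is a minimal Python sketch using only the numbers stated in this exchange: |Ω| = 5 final states, a one-state target, and two choices per step (all other details of the random walk are left out).

import math

card_omega = 5        # final states in the model, as stated above
card_target = 1       # the target is one of those final states
choices_per_step = 2  # each step of the walk goes one of two ways

# Endogenous probability and information are defined from |T|/|Omega|; the branching factor never enters.
endogenous_prob = card_target / card_omega
endogenous_info = -math.log2(endogenous_prob)
print(endogenous_prob, endogenous_info)  # 0.2, ~2.32 bits

# The 1/2 under dispute is a per-step transition probability, a different quantity altogether.
print(1 / choices_per_step)  # 0.5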
Joe, try here: http://htmlhelp.com/reference/html40/entities/symbols.html You'll have to enter the codes manually but they work. Ω Chance Ratcliff
font face="Symbol" doesn't seem to be supported Joe
test 2- W omega symbol Joe
testing- ? from a cut-n-paste from a .doc insert symbol ? Joe
F/N: Wiki, on sampling frame vs. population:
In statistics, a sampling frame is the source material or device from which a sample is drawn.[1] It is a list of all those within a population who can be sampled, and may include individuals, households or institutions . . . . In the most straightforward case, such as when dealing with a batch of material from a production run, or using a census, it is possible to identify and measure every single item in the population and to include any one of them in our sample; this is known as direct element sampling.[1] However, in many other cases this is not possible; either because it is cost-prohibitive (reaching every citizen of a country) or impossible (reaching all humans alive).
In short, a population of possibilities is often sampled, and that sample may come from a defined subset that may or may not bias outcomes. In the case of a config space W [Omega will not print right], we may set up a frame F, that contains a zone of interest, T. If it does so, the odds of a sample of size s hitting T in F will be very different from that of s in W. That is simple to see. It may be harder to see that, say, a warmer/colder set of instructions, is such a framing. But obviously, this is telling whether one is trending right or wrong. That is, hill-climbing reframes a search task in ways that make it much easier to hit T. Now, multiply by three factors:
a: s is constrained by accessible resources, in such a way that a blind, random search on W is maximally unlikely to hit T.
b: by suitably reframing to a suitable F, s is now much more likely to hit T.
c: But by reframing to G, s is now even more unlikely to hit T than a blind random search on W, as T is excluded from G.
Now, obviously, moving from W to F is significant. In effect F maps a hot zone that drastically enhances the expected outcome of s. But, that implies that picking your F is itself a result of a higher order challenge. For if T is small and isolated in W, then if we pick a frame at random, a type-G is far more likely than a type-F. So, the search for a frame is a highly challenging search task itself. Indeed, in the case of interest, comparable to the search for T in W itself. The easiest way to get a type-F is to use accurate information. For instance, those who search for sunken Spanish Treasure fleet ships often spend more time in the Archive of the Indies in Spain than in the field; that is how significant finding the right frame can be. Where also, it is that information that gets us to a type-F search rather than the original type-W one. Indeed, the Dembski-Marks model boils down to measuring the typical improvement provided by advantageous framing. This, by in effect converting the jump in estimated probability in moving frame from W to F into an information metric. (Probabilities are related to information, as standard info theory results show.) That, contrary to dismissive remarks, is reasonable. The relevance of all this to the debates over FSCO/I is obvious. When we have a functional object that depends for functionality on the correct arrangement of well-matched parts, this object can be mapped in a field of possibilities W, in zones of interest T. One way to reduce this to information is to set up a nodes-arcs specification that WLOG can be converted into a structured set of strings. (AutoCad is used for this all the time, and the DWG file size serves as a good rule of thumb metric of the degree of complexity.) Obviously not any config of components will work. Just think about trying to put a car engine back together and getting it to work at random, or turning a random configuration of alphanumeric characters back into a functioning computer program. That is where the concept of islands of function comes from. A simple solar system level threshold for enough complexity to make the isolation of T significant is 500 bits. At that level, the 10^57 atoms of our solar system, across its lifespan of about 10^17 s on the typical timeline, at the fastest rates of chemical reactions would be able to look at maybe the equivalent of a one-straw-sized sample of a cubical hay bale 1,000 light years thick. That is how the frame would be naturally constrained as to scope. Even if such a bale were superposed on the Galaxy, centred on Earth -- about as thick -- a sample at random would (per sampling theory) be overwhelmingly likely to reflect the bulk of the distribution, straw. That is the issue of FSCO/I, and it is why the most credible causal source for it is design. KF kairosfocus
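The 500-bit estimate above can be put in rough numbers. A back-of-envelope Python sketch follows; note that the 10^14 interactions per second figure is an assumed stand-in for "the fastest rates of chemical reactions" and is not given in the comment itself.

# Back-of-envelope check, in floating point.
atoms = 1e57         # atoms in the solar system (figure used above)
seconds = 1e17       # rough timeline for the solar system (figure used above)
ops_per_sec = 1e14   # assumed fast chemical-interaction rate (~10^-14 s per event); not stated above

max_samples = atoms * seconds * ops_per_sec  # ~1e88 possible search operations
configs_500_bits = 2.0 ** 500                # ~3.27e150 possibilities for 500 bits

print(f"{max_samples:.1e}")                     # 1.0e+88
print(f"{max_samples / configs_500_bits:.1e}")  # ~3.1e-63, the fraction of the space that could ever be sampled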
I was wondering why no one else had responded to R0bb's demonstrations of LCI violations; now I know. :) Joe
Alrighty then, moving on to R0bb's second "demonstration":
He used to make this point often. But two of his new information measures, “endogenous information” and “active information”, depend on the procedure used to individuate the possible outcomes, and are therefore ill-defined according to Dembski’s earlier position. To see how this fact allows arbitrarily high measures of active information, consider how we model the rolling of a six-sided die. We would typically define Ω as the set {1, 2, 3, 4, 5, 6}. If the goal is to roll a number higher than one, then our target T is {2, 3, 4, 5, 6}. The amount of active information I+ is log2(P(T) / (|T|/|Ω|)) = log2((5/6) / (5/6)) = 0 bits. But we could, instead, define Ω as {1, higher than 1}. In that case, I+ = log2((5/6) / (1/2)) = .7 bits. What we’re modeling hasn’t changed, but we’ve gained active information by making a different modeling choice. Furthermore, borrowing an example from Dembski, we could distinguish getting a 1 with the die landing on the table from getting a 1 with the die landing on the floor. That is, Ω = { 1 on table, 1 on floor, higher than 1 }. Now I+ = log2((5/6) / (1/3)) = 1.3 bits. And we could keep changing how we individuate outcomes until we get as much active information as we desire.
Again just because R0bb can muddle his target set doesn't mean someone else can't come along, take the muddled set and make simple sense out of it. For example instead of defining Ω as the muddled {1, higher than 1}, we would properly define it as {1,2,3,4,5,6}. So this second example goes to my point about R0bb's first example- that yes, if you get to do whatever you want, no matter how senseless it is, you can seem to violate LCI. After running around the table after your first example, you were probably drinking champagne after posting your second example. Joe
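Setting the back-and-forth aside, the calculation quoted above is easy to reproduce; a short Python sketch of the three ways of individuating the die's outcomes:

import math

def active_info(p_target, card_target, card_omega):
    # I+ = log2( P(T) / (|T| / |Omega|) ); here P(T) = 5/6 in every case
    return math.log2(p_target / (card_target / card_omega))

p_target = 5 / 6  # probability of rolling higher than 1

# Omega = {1, 2, 3, 4, 5, 6}, T = {2, 3, 4, 5, 6}:
print(active_info(p_target, 5, 6))  # 0.0 bits

# Omega = {1, higher than 1}, T = {higher than 1}:
print(active_info(p_target, 1, 2))  # ~0.74 bits

# Omega = {1 on table, 1 on floor, higher than 1}, T = {higher than 1}:
print(active_info(p_target, 1, 3))  # ~1.32 bits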
R0bb:
Consistently excluding zero-probability outcomes from Ω would yield bizarre results.
So with the dice example I quoted above does that mean we should also include numbers 7 - infinity? And of course a coin toss would then have more than two outcomes- in zero G. Wow R0bb, thanks. That clears up my misunderstanding. If we just do whatever we want we can violate the LCI. I bet you ran around the table with your hands in the air, cheering for yourself, once you figured that out. Thumbs high, big guy... Joe
From LCI simplified:
To see how this works, let's consider a toy problem. Imagine that your search space consists of only six items, labeled 1 through 6. Let's say your target is item 6 and that you're going to search this space by rolling a fair die once. If it lands on 6, your search is successful; otherwise, it's unsuccessful. So your probability of success is 1/6. Now let's say you want to increase the probability of success to 1/2. You therefore find a machine that flips a fair coin and delivers item 6 to you if it lands heads and delivers some other item in the search space if it lands tails. What a great machine, you think. It significantly boosts the probability of obtaining item 6 (from 1/6 to 1/2).
Any one of the 6 outcomes can be had on any ONE roll of the dice. Not so with your first example. Joe
R0bb- Perhaps you can tell us which one of Dembski & Marks' examples your example 1 is copying. The point is I say you pulled your example from your _______ and it has nothing to do with what they are saying. Joe
R0bb- There aren't 5 choices, ever. The item only has two choices, three if it can stay put. You can spew your rhetoric all you want it ain't ever going to change that fact. Joe
Well DiEB, R0bb smooched the pooch with his first example as he didn’t realize that at any point the item moving only has two choices, not five. IOW the “problems” seem to be the anti-IDists, not the LCI
Endogenous information is defined in terms of the cardinality of Ω, not the number of choices available to the alternate search. Have you read the examples and proofs in Dembski's work? R0bb
Joe:
And if there HAS to be a shift then there are only 2 positions for the final state, which means (1/2)(1/2)
Thanks for bringing this up as it deals with a core question:  How do we define Ω?  You say that instead of defining Ω with 5 outcomes, we should define it to include only the 2 outcomes that are possible, i.e. the 2 outcomes on which the alternate search confers a non-zero probability. The problem is that Dembski and Marks don't define Ω this way. In most of their examples, active information is the result of restricting the alternate search to a subset of Ω.  If they were to follow your reasoning, they would define Ω to include only the outcomes that are accessible to the alternate search, and the resulting active information would be zero.  So by your reasoning, their examples of active information don't really have any active information. Consistently excluding zero-probability outcomes from Ω would yield bizarre results.  Consider a search that confers a probability of almost 1 on the target and almost 0 on all other outcomes.  This extreme bias toward the target constitutes a lot of active information.  Now suppose we improve the search slightly so that the probability of hitting the target is actually 1, and all other outcomes have a probability of 0.  We duly redefine Ω to include only the target, and as a consequence, the baseline search also hits the target with a probability of 1.  The active information is now 0. So a slight improvement in the search resulted in a drastic reduction in active information. Furthermore, Dembski and Marks do not exclude zero-probability outcomes from Ω in their proofs of their CoI theorems.  If they did, the proofs would far more complicated as they would have to consider multiple definitions of Ω. It would be great if we could have a canonical rule that Ω is always exactly the set of outcomes on which the alternate search confers non-zero probability.  Then there would be no question as to how Ω should be defined.  But as shown above, that's not how Dembski and Marks do it, and with good reason.  So the choice of how to define Ω is up to us when we model a process, which is the subject of my second post at TSZ. R0bb
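R0bb's "bizarre results" scenario can also be put in numbers. A minimal Python sketch, where the 10-outcome Ω and the 0.99 probability are made-up illustrative values, not figures from the papers:

import math

def active_info(p_target, card_omega, card_target=1):
    return math.log2(p_target / (card_target / card_omega))

# A strongly biased search over a 10-outcome Omega, zero-probability outcomes left in Omega:
print(active_info(0.99, 10))  # ~3.31 bits

# "Improve" the search so it hits the target with probability 1, then drop the now
# zero-probability outcomes from Omega, leaving only the target itself:
print(active_info(1.0, 1))    # 0.0 bits -- a slightly better search, drastically less active information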
Well DiEB, R0bb smooched the pooch with his first example as he didn't realize that at any point the item moving only has two choices, not five. IOW the "problems" seem to be the anti-IDists, not the LCI Joe
OK R0bb is awake and posting over on TSZ. Hopefully he makes it over here... Joe
Oops- forgot the / in my equations: (1/2)/(1/2) (comment 24) Joe
Can anyone point to an intelligent being that, without prior information, can create something that finds a needle faster than random search?
Yes, it's called a metal detector and humans use them on a daily basis to find those proverbial needles in haystacks Joe
And if there HAS to be a shift then there are only 2 positions for the final state, which means (1/2)(1/2) Joe
R0bb- If you are starting on n-1, as your example states, then there are only 3 final states in which it can land, not 5. Actually it doesn't matter where you start, there will always only be 3 final states it can be in after one move. Therefore (1/3)(1/2) NOT (1/5)(1/2) Anything else I can help you with? Joe
Semi OT:
Learning from Bacteria about Information Processing - video Excerpt: I will show illuminating movies of swarming intelligence of live bacteria in which they solve optimization problems for collective decision making that are beyond what we, human beings, can solve with our most powerful computers. I will discuss the special nature of bacteria computational principles in comparison to our Turing Algorithm computational principles, http://www.youtube.com/watch?v=yJpi8SnFXHs
bornagain77
Robb, I noticed a glaring omission in my perusal of your 3 posts. You did not cite any actual example of Neo-Darwinian processes producing any functional information. Nor did I see you provide any example of exactly where in the physical universe, besides earth, the LCI would not hold, as I had asked you previously. Seeing as I am not trained in mathematics, am I supposed to just take your word that it can be done by blind/dumb material processes, without an actual example from empirics? The reason I ask you specifically for actual evidence is that neo-Darwinists have a notorious history of claiming that they have overwhelming evidence for evolution, yet when one checks carefully for actual evidence the claims always come up short. Perhaps, Robb, you can be the first to remedy this shameful history of deception on neo-Darwinists' part and produce an actual physical example before you proceed as if Darwinism has any empirical proof for its validity. Notes to that effect: in spite of the fact that molecular motors and highly sophisticated systems permeate the simplest of bacterial life, there are no detailed Darwinian accounts for the evolution of even one such motor or system.
"There are no detailed Darwinian accounts for the evolution of any fundamental biochemical or cellular system only a variety of wishful speculations. It is remarkable that Darwinism is accepted as a satisfactory explanation of such a vast subject." James Shapiro - Molecular Biologist
The following expert doesn't even hide his very unscientific preconceived philosophical bias against intelligent design,,,
‘We should reject, as a matter of principle, the substitution of intelligent design for the dialogue of chance and necessity,,,
Yet at the same time the same expert readily admits that neo-Darwinism has ZERO evidence for the chance and necessity of material processes producing any cellular system whatsoever,,,
,,,we must concede that there are presently no detailed Darwinian accounts of the evolution of any biochemical or cellular system, only a variety of wishful speculations.’ Franklin M. Harold,* 2001. The way of the cell: molecules, organisms and the order of life, Oxford University Press, New York, p. 205. *Professor Emeritus of Biochemistry, Colorado State University, USA Michael Behe - No Scientific Literature For Evolution of Any Irreducibly Complex Molecular Machines http://www.metacafe.com/watch/5302950/ “The response I have received from repeating Behe's claim about the evolutionary literature, which simply brings out the point being made implicitly by many others, such as Chris Dutton and so on, is that I obviously have not read the right books. There are, I am sure, evolutionists who have described how the transitions in question could have occurred.” And he continues, “When I ask in which books I can find these discussions, however, I either get no answer or else some titles that, upon examination, do not, in fact, contain the promised accounts. That such accounts exist seems to be something that is widely known, but I have yet to encounter anyone who knows where they exist.” David Ray Griffin - retired professor of philosophy of religion and theology
As well, Robb, speaking of actual empirical evidence: do you believe that the very surprising recent findings of 'non-local' (not reducible to causes within space and time) quantum information/entanglement in molecular biology finally offer somewhat tangible support for the theist's contention for a soul, or not?
Does Quantum Biology Support A Quantum Soul? – Stuart Hameroff - video (notes in description) http://vimeo.com/29895068 Falsification Of Neo-Darwinism by Quantum Entanglement/Information https://docs.google.com/document/d/1p8AQgqFqiRQwyaF8t1_CKTPQ9duN8FHU9-pV4oBDOVs/edit?hl=en_US
bornagain77
Rob, your example of Bertrand's Box highlights another problem I have with W. Dembski's and R. Marks's papers: without discussing whether it is suitable, they take the arithmetic mean to get the average of the active information for various experiments. Your example shows that the active information of the example is 1 (as the probability of finding a gold coin is 1/2) - and that is what the average should be - while the active information of the three equally probable partial experiments is infinity, 0 and 1. The arithmetic mean doesn't make much sense in this context. DiEb
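To illustrate the averaging issue DiEb raises, here is a small sketch with stand-in numbers of my own, not the exact Bertrand's Box figures: across equally likely branches the overall success probability is the arithmetic mean of the branch probabilities, but the arithmetic mean of the per-branch active informations is a different quantity, and a single zero-probability branch drags it to negative infinity.

```python
import math

p = 0.25                 # hypothetical baseline probability of success
qs = [1.0, 0.25, 0.0]    # hypothetical success probabilities of three equally likely branches

def active_info(q, p):
    """Active information in bits, log2(q/p); negative infinity when q = 0."""
    return math.log2(q / p) if q > 0 else float("-inf")

# Active information of the combined experiment: the overall success
# probability is the arithmetic mean of the branch probabilities.
overall = sum(qs) / len(qs)
print(active_info(overall, p))            # log2(0.4167 / 0.25), about 0.74 bits

# Arithmetic mean of the per-branch active informations.
per_branch = [active_info(q, p) for q in qs]
print(sum(per_branch) / len(per_branch))  # -inf, dominated by the q = 0 branch
```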
A few random observations wrt the article:
So our attempt to increase the probability of finding item 6 by locating a more effective search for that item has actually backfired, making it in the end even more improbable that we'll find item 6 ... Conservation of information calculates the information cost of this performance increase and shows how it must be counterbalanced by a loss in search performance elsewhere (specifically, by needing to search for the information that boosts search performance) so that global performance in locating the target is not improved and may in fact diminish. ... raising the probability of success of a search does nothing to make attaining the target easier, and may in fact make it more difficult ... If we therefore start with a search having probability of success p and then raise it to q, the actual probability of finding the target is ... less than or equal to p
These bolded phrases lend themselves to misunderstanding. We could interpret them to say that adding a higher-level search to the mix can decrease the overall probability of finding the original target, but such is not the case in any of Dembski's CoI theorems or examples. When Dembski says that the probability of finding the target may decrease, he seems to actually mean that the probability of finding the higher-level target AND the lower-level target may be less than the probability of simply finding the lower-level target directly. But that clarification of the statement renders it much less impressive. We don't need the LCI to tell us that P(X&Y) ≤ P(Y) (where P(Y) is the probability of finding the target directly, and also the unconditional probability of finding the target via a two-level search, the two probabilities being equal in all of Dembski's examples). This is simply a truism of probability theory.
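To see the truism in action, here is a quick simulation of a generic two-level search; the setup and numbers (five candidate searches, one of which hits the target 90% of the time) are my own illustration, not one of Dembski's examples. The joint event "picked the good search AND hit the target" can never be more probable than "hit the target".

```python
import random

random.seed(0)

# Hypothetical two-level search: the higher level picks one of five candidate
# searches uniformly at random; search i then hits the target with probability q[i].
q = [0.9, 0.1, 0.1, 0.1, 0.1]   # search 0 is the "good" search (event X = picking it)
good = 0
trials = 200_000

hit_target = 0            # event Y: the target is found
hit_good_and_target = 0   # event X & Y
for _ in range(trials):
    i = random.randrange(len(q))
    hit = random.random() < q[i]
    hit_target += hit
    hit_good_and_target += hit and (i == good)

print(hit_target / trials)           # P(Y), about 0.26 (the mean of q)
print(hit_good_and_target / trials)  # P(X & Y), about 0.18, necessarily <= P(Y)
```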
Indeed, that is the defining property of intelligence, its ability to create information, especially information that finds needles in haystacks.
Where is the evidence that intelligence can create information that finds needles in haystacks? Can anyone point to an intelligent being that, without prior information, can create something that finds a needle faster than random search? The LCI supposedly shows, using math that makes no exceptions for intelligence, that this is impossible. Can designers do mathematically impossible things?
The whole magic of evolution is that it's supposed to explain subsequent complexity in terms of prior simplicity, but conservation of information says that there never was a prior state of primordial simplicity
Earlier in the article, Dembski uses complexity to mean improbability, which is the kind of complexity that the LCI deals with. But here he is using complexity in the sense of structural complexity, a property on which the LCI is silent. If Dembski wants to assert that structural complexity is conserved, he needs to quantify this type of complexity and somehow link it to the LCI.
Instead, we see sharply disconnected islands of invention inaccessible to one another by mere gradual tinkering.
I don't know what he means by "mere gradual tinkering", but obviously inventors do more than randomly modify stuff. Their searches are assisted by a lot of information about how reality works. They didn't create this information -- it has been gleaned from observations over many centuries. Has Dembski included this information in his accounting? We don't know, because he hasn't shown us the ledger, or even mentioned what items would be involved in such an accounting. Or to put it another way, he hasn't shown even a rough model of the search space topology, including the connections that are forged by pre-existing knowledge. Inventors, of course, differ from rocks, hurricanes, and cows in their ability to employ this information. What evidence is there that intelligence, as the notion is typically understood, entails the ability to "create information" as opposed to the ability to effectively use existing information? R0bb
But yes I would love to see your demonstration of the 3 points you claim.
Okay -- here, here, and here. R0bb
Dear Dr. Dembski - as evolutionnews doesn't allow comments on this article, I hope that you take the opportunity to discuss your insights on this, your former blog! BTW: I can't help noticing that I'm now mentioned twice in the article "The Search for a Search"! May I expect a bottle of whisky (or some other token of your appreciation) for my contributions to the idea of active information? DiEb
Semi Off Topic:
Information Storage in DNA by Wyss Institute https://vimeo.com/47615970 Harvard cracks DNA storage, crams 700 terabytes of data into a single gram - Sebastian Anthony on August 17, 2012 Excerpt: A bioengineer and geneticist at Harvard’s Wyss Institute have successfully stored 5.5 petabits of data — around 700 terabytes — in a single gram of DNA, smashing the previous DNA data density record by a thousand times.,,, Just think about it for a moment: One gram of DNA can store 700 terabytes of data. That’s 14,000 50-gigabyte Blu-ray discs… in a droplet of DNA that would fit on the tip of your pinky. To store the same kind of data on hard drives — the densest storage medium in use today — you’d need 233 3TB drives, weighing a total of 151 kilos. In Church and Kosuri’s case, they have successfully stored around 700 kilobytes of data in DNA — Church’s latest book, in fact — and proceeded to make 70 billion copies (which they claim, jokingly, makes it the best-selling book of all time!) totaling 44 petabytes of data stored. http://www.extremetech.com/extreme/134672-harvard-cracks-dna-storage-crams-700-terabytes-of-data-into-a-single-gram DNA Stores Data More Efficiently than Anything We've Created Casey Luskin August 29, 2012 Excerpt: Nothing made by humans can approach these kind of specs. Who would have thought that DNA can store data more efficiently than anything we've created. But DNA wasn't designed -- right? http://www.evolutionnews.org/2012/08/who_would_have_063701.html
bornagain77
Robb you make a statement here that I find peculiar:
1) Contrary to Dembski’s claim, the LCI (Law Of Conservation Of Information) is not universal. Counterexamples are easy to find.
Now I'm trying to think of just where in the universe the law of conservation of information, especially as it pertains to Darwinian evolution, would not possibly hold: Would it be in deep interstellar space, where it is just a few degrees above absolute zero and devoid of mass, that you believe the law does not hold? Or is it in the interior of stars, pulsars, quasars or black holes that you believe it does not hold? Please tell me, Robb, from the following inventory where it does not hold:
Table 2.1 Inventory of All the Stuff That Makes Up the Universe (Visible vs. Invisible)
Dark Energy - 72.1%
Exotic Dark Matter - 23.3%
Ordinary Dark Matter - 4.35%
Ordinary Bright Matter (Stars) - 0.27%
Planets - 0.0001%
Invisible portion of the Universe - 99.73%
Visible portion of the Universe - 0.27%
of note: The preceding 'inventory' of the universe is updated to the second and third releases of the Wilkinson Microwave Anisotropy Probe's (WMAP) results in 2006 & 2008; (Why The Universe Is The Way It Is; Hugh Ross; pg. 37)
Even the most favorable circumstances for Darwinian processes to conduct successful searches for functional information, right here on earth, are certainly extremely rare in the universe, and they are certainly no easy thing for the 'universe' to accomplish on its own before a search for functional information can even begin:
Does the Probability for ETI = 1? Excerpt: On the Reasons To Believe website we document that the probability a randomly selected planet would possess all the characteristics intelligent life requires is less than 10^-304. A recent update that will be published with my next book, Hidden Purposes: Why the Universe Is the Way It Is, puts that probability at 10^-1054.
Linked from Appendix C of Dr. Ross's book, 'Why the Universe Is the Way It Is':
Probability for occurrence of all 816 parameters ≈ 10^-1333
dependency factors estimate ≈ 10^324
longevity requirements estimate ≈ 10^45
Probability for occurrence of all 816 parameters ≈ 10^-1054
Maximum possible number of life support bodies in observable universe ≈ 10^22
Thus, less than 1 chance in 10^1032 exists that even one such life-support body would occur anywhere in the universe without invoking divine miracles. http://www.reasons.org/files/compendium/compendium_part3.pdf
Hugh Ross - Evidence For Intelligent Design Is Everywhere (10^-1054) - video http://www.metacafe.com/watch/4347236
Moreover, even granting the most favorable of circumstances right here on earth does not circumvent the law's grip preventing Darwinian processes from ever producing functional information:
HISTORY OF EVOLUTIONARY THEORY - WISTAR DESTROYS EVOLUTION Excerpt: A number of mathematicians, familiar with the biological problems, spoke at that 1966 Wistar Institute,, For example, Murray Eden showed that it would be impossible for even a single ordered pair of genes to be produced by DNA mutations in the bacteria, E. coli,—with 5 billion years in which to produce it! His estimate was based on 5 trillion tons of the bacteria covering the planet to a depth of nearly an inch during that 5 billion years. He then explained that the genes of E. coli contain over a trillion (10^12) bits of data. That is the number 10 followed by 12 zeros. *Eden then showed the mathematical impossibility of protein forming by chance. http://www.pathlights.com/ce_encyclopedia/Encyclopedia/20hist12.htm
I readily admit, Robb, that I'm not a mathematician and that it is hard for me to follow some of the high-level debates you have engaged in while trying to undermine the credibility of the LCI as laid out by Dembski, Marks, Ewert, etc., but from a practical, empirical point of view I'm left wondering just how you plan to show that the LCI is not universal to Darwinian processes as far as physical reality itself is concerned. Related note: it is also extremely interesting to note that the principle of Genetic Entropy, a principle which stands in direct opposition to the primary claim of neo-Darwinian evolution, and is in complete agreement with the second law of thermodynamics and the Law of Conservation of Information, lends itself quite well to mathematical analysis by computer simulation:
Using Computer Simulation to Understand Mutation Accumulation Dynamics and Genetic Load: Excerpt: We apply a biologically realistic forward-time population genetics program to study human mutation accumulation under a wide-range of circumstances.,, Our numerical simulations consistently show that deleterious mutations accumulate linearly across a large portion of the relevant parameter space. http://bioinformatics.cau.edu.cn/lecture/chinaproof.pdf MENDEL’S ACCOUNTANT: J. SANFORD†, J. BAUMGARDNER‡, W. BREWER§, P. GIBSON¶, AND W. REMINE http://mendelsaccount.sourceforge.net Genetic Entropy - Dr. John Sanford - Evolution vs. Reality - video (Notes in description) http://vimeo.com/35088933 Are You Looking for the Simplest and Clearest Argument for Intelligent Design? - Granville Sewell (2nd Law) - video http://www.evolutionnews.org/2012/02/looking_for_the056711.html Physicist Rob Sheldon offers some thoughts on Sal Cordova vs. Granville Sewell on 2nd Law Thermo - July 2012 Excerpt: This is where Granville derives the potency of his argument, since a living organism certainly shows unusual permutations of the atoms, and thus has stat mech entropy that via Boltzmann, must obey the 2nd law. If life violates this, then it must not be lawfully possible for evolution to happen (without an input of work or information.) https://uncommondesc.wpengine.com/intelligent-design/physicist-rob-sheldon-offers-some-thoughts-on-sal-cordova-vs-granville-sewell-on-2nd-law-thermo/ Evolution Vs. Thermodynamics - Open System Refutation - Thomas Kindell - video http://www.metacafe.com/watch/4143014
bornagain77
Oh wait, there is a mathematical model for evolutionism: mother nature + father time + magical mystery mutations = the diversity of life Joe
B) Given #1 above, claims that the LCI applies to Darwinian evolution must be justified, which would involve mathematically modeling Darwinian evolution. This is something that no IDist has done, AFAIK.
Umm, there isn't any mathematical modeling of Darwinian evolution. You cannot mathematically model imagination and magical mystery mutations. And that is the problem with evolutionism and materialism -> no mathematical connection to the real world. But yes, I would love to see your demonstration of the 3 points you claim. Joe
I know I'll regret posting this when I don't have time to carry on a conversation, but here goes. Some things to note about the LCI, independent of any ID claims:
1) Contrary to Dembski's claim, the LCI is not universal. Counterexamples are easy to find.
2) Active information is sensitive to the definitions of the lower- and higher-level search spaces, which are modeling choices. Any observed process can be modeled such that it violates the LCI, and any observed process can be modeled such that the LCI holds.
3) Even with models for which the LCI holds, there is still no guarantee that active information won't be generated by chance. In fact, it's easy to come up with a scenario in which we expect this to occur.
Any of the above can be conclusively demonstrated, albeit not easily in a blog comment. If anyone disputes any of the above facts, let me know, and I'll post demonstrations at TSZ when I get the time.
Some things to note about the LCI with regards to ID:
A) Given conditions under which the LCI holds, intelligent designers are no less constrained by the LCI than nature is, since the LCI is strictly mathematical. So the LCI can't be employed to distinguish an intelligent cause from a natural cause.
B) Given #1 above, claims that the LCI applies to Darwinian evolution must be justified, which would involve mathematically modeling Darwinian evolution. This is something that no IDist has done, AFAIK.
C) The Principle of Indifference (also called the Principle of Insufficient Reason) is a heuristic for assigning prior epistemic probabilities in the face of ignorance. Assuming, without updating the prior, that the prior reflects reality is literally an argument from ignorance. R0bb
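One way to read point 3 above, offered as a toy sketch of my own rather than the demonstration R0bb promises: generate a "search" entirely by chance and check how often it happens to carry positive active information relative to blind search. All numbers here are illustrative assumptions.

```python
import math, random

random.seed(1)
N = 10            # |Omega|
target = 0
p = 1 / N         # probability of the target under blind uniform search
trials = 100_000

positive = 0
for _ in range(trials):
    # Blindly "search for a search": pick a random 2-element subset of Omega
    # and let the resulting search sample uniformly within that subset.
    subset = random.sample(range(N), 2)
    q = 0.5 if target in subset else 0.0
    if q > 0 and math.log2(q / p) > 0:
        positive += 1

print(positive / trials)   # ~0.2: about a fifth of the randomly generated
                           # searches carry log2(0.5/0.1) ~ 2.3 bits of
                           # active information, purely by chance
```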
As well, to add further plausibility to Christ's unique claim, the following study shows that SAT (Scholastic Aptitude Test) scores for American students declined steadily for seventeen years, from at or near the top spot in the world, after the removal of prayer from the public classroom by the Supreme Court (not by public decree) in 1963, whereas the SAT scores for private Christian schools have consistently remained at or near the top spot in the world:
The Real Reason American Education Has Slipped – David Barton – video http://www.metacafe.com/watch/4318930 AMERICA: To Pray Or Not To Pray - David Barton - graphs corrected for population growth http://www.whatyouknowmightnotbeso.com/graphs.html
Of related interest, in defense of the integrity of his scholarship, David Barton responds to several recent attacks from seemingly 'friendly scholars' on a book he wrote suggesting that Thomas Jefferson may have held beliefs far more friendly to Christianity than many recent 'revisionist' historians have portrayed to the general public:
David Barton responds to critics during an interview with Glenn Beck on GBTV - Aug, 2012 http://t.co/FPk503pp
Moreover, the rise of America to preeminence in science was accompanied by a 'Christian revival':
Bruce Charlton's Miscellany - October 2011 Excerpt: I had discovered that over the same period of the twentieth century that the US had risen to scientific eminence it had undergone a significant Christian revival. ,,,The point I put to (Richard) Dawkins was that the USA was simultaneously by-far the most dominant scientific nation in the world (I knew this from various scientometic studies I was doing at the time) and by-far the most religious (Christian) nation in the world. How, I asked, could this be - if Christianity was culturally inimical to science? http://charltonteaching.blogspot.com/2011/10/meeting-richard-dawkins-and-his-wife.html
And while the preceding certainly should raise a few eyebrows as to the plausibility of Christ's unique claim to be 'the truth' (i.e. the existing information) that has enabled humans' 'successful searches' for scientific truth, I would like to point out, once again, that the resurrection of Christ finds itself, strangely, at the center of the number 1 problem in science today: the unification of General Relativity and Quantum Mechanics into a 'theory of everything'. Amazingly, a very credible reconciliation of that problem is found in the resurrection event of Christ, though on the atheistic mindset Christ's resurrection should be nowhere near offering such a credible solution:
General Relativity, Quantum Mechanics, Entropy, and The Shroud Of Turin - updated video (notes in video description) http://vimeo.com/34084462
i.e. when one allows God into math, as Gödel indicated must ultimately be done to keep math from being 'incomplete', then there actually exists a very credible, empirically backed reconciliation of Quantum Mechanics and General Relativity into a 'Theory of Everything'!,,, As a footnote: Gödel, who proved that you cannot have a mathematical 'Theory of Everything' without allowing God to bring 'completeness' to it, also had this to say:
The God of the Mathematicians – Goldman Excerpt: As Gödel told Hao Wang, “Einstein’s religion [was] more abstract, like Spinoza and Indian philosophy. Spinoza’s god is less than a person; mine is more than a person; because God can play the role of a person.” – Kurt Gödel – (Gödel is considered one of the greatest logicians who ever existed) http://www.firstthings.com/article/2010/07/the-god-of-the-mathematicians
further notes:
Colossians 1:15-20 The Son is the image of the invisible God, the firstborn over all creation. For in him all things were created: things in heaven and on earth, visible and invisible, whether thrones or powers or rulers or authorities; all things have been created through him and for him. He is before all things, and in him all things hold together. And he is the head of the body, the church; he is the beginning and the firstborn from among the dead, so that in everything he might have the supremacy. For God was pleased to have all his fullness dwell in him, and through him to reconcile to himself all things, whether things on earth or things in heaven, by making peace through his blood, shed on the cross.
as well:
The End Of Christianity - Finding a Good God in an Evil World - Pg.31 William Dembski PhD. Mathematics Excerpt: "In mathematics there are two ways to go to infinity. One is to grow large without measure. The other is to form a fraction in which the denominator goes to zero. The Cross is a path of humility in which the infinite God becomes finite and then contracts to zero, only to resurrect and thereby unite a finite humanity within a newfound infinity." http://www.designinference.com/documents/2009.05.end_of_xty.pdf Philippians 2: 5-11 Let this mind be in you, which was also in Christ Jesus: Who, being in the form of God, thought it not robbery to be equal with God: But made himself of no reputation, and took upon him the form of a servant, and was made in the likeness of men: And being found in fashion as a man, he humbled himself, and became obedient unto death, even the death of the cross. Wherefore God also hath highly exalted him, and given him a name which is above every name: That at the name of Jesus every knee should bow, of things in heaven, and things in earth, and things under the earth; And that every tongue should confess that Jesus Christ is Lord, to the glory of God the Father.
All in all, though certainly not proof in any rigorous sense, as the 'conservation of information' is, this cumulative evidence for Christ being the source of all truth, as he claimed he was, should certainly lend enough plausibility to give severe pause to anyone who has written Christ off as a fantasy on par with unicorns and the Easter Bunny - at least anyone who is reasonable enough not to be given over to denying God/Christ at all costs:
'Other than Christ, no other religious leader was foretold a thousand years before he arrived, nor was anything said about where he would be born, why he would come, how he would live, and when he would die. No other religious leader claimed to be God, or performed miracles, or rose from the dead. No other religious leader grounded his doctrine in historical facts. No other religious leader declared his person to be even more important than his teachings.' - StephenB - UD Blogger
Music
Empty (Empty Cross Empty Tomb) with Dan Haseltine Matt Hammitt (Music Inspired by The Story) http://www.godtube.com/watch/?v=F22MCCNU
bornagain77
Of somewhat related note: as to the limits that the 'conservation of information' places on 'finite' human intelligence conducting successful searches for 'true knowledge', I would like to point out the following limit that Gödel and Turing found.
Alan Turing & Kurt Godel - Incompleteness Theorem, Halting Problem, and Human Intuition - video (notes in video description) http://www.metacafe.com/watch/8516356/
This 'human intuition' that Gödel, a Christian theist and close friend of Einstein, appealed to in order to overcome the 'materialistic' limits on increasing human knowledge that he and Turing had found for finite human intelligence and for material computers, respectively, is, in my opinion, far too vague. Exactly where is this 'true knowledge' coming from that makes our 'human intuition' searches truly successful? Or as Dr. Dembski has put it,,,
Searches achieve success not by creating information but by taking advantage of existing information.
i.e. exactly where is this 'existing information' coming from that makes humans' searches for true knowledge successful? Appealing to 'human intuition', as Gödel does, in my opinion does little to clarify exactly where the 'true knowledge' is coming from that makes any particular human search for truth successful. In fact, given the overwhelming propensity of Darwinists to choose any solution they can possibly imagine over the real solution staring them in the face, namely I.D., I would have to say it is not of minor importance to identify the actual source of 'true knowledge', separating it from human imagination as best we can as finite humans, instead of just leaving it 'up in the air' to something as vague as 'human intuition'. As to this endeavor at clarification, I believe a fairly strong case for plausibility can now be made to solidify Christ's claim to being the ultimate 'source of truth':
John 14:6 Jesus answered, "I am the way and the truth and the life. No one comes to the Father except through me. John 15:4 Remain in me, and I will remain in you. No branch can bear fruit by itself; it must remain in the vine. Neither can you bear fruit unless you remain in me. John 8:31-32 To the Jews who had believed him, Jesus said, "If you hold to my teaching, you are really my disciples. Then you will know the truth, and the truth will set you free."
Sir Isaac Newton, whom many consider to be the greatest scientist of all time, was certainly not bashful about giving credit to God for his successful search for truth:
“I have a fundamental belief in the Bible as the Word of God, written by men who were inspired. I study the Bible daily…. All my discoveries have been made in an answer to prayer.” — Sir Isaac Newton (1642-1727)
As well, Sir Isaac Newton's book 'Principia' is considered by many to be the most important scientific work of all time, the one that had the greatest impact on transforming Western culture and on bringing modern science to a sustainable level of maturity. The book contains a General Scholium (General Interpretation) that reads in part,,,
This most beautiful system of the sun, planets, and comets, could only proceed from the counsel and dominion of an intelligent and powerful Being. And if the fixed stars are the centres of other like systems, these, being formed by the like wise counsel, must be all subject to the dominion of One; especially since the light of the fixed stars is of the same nature with the light of the sun, and from every system light passes into all the other systems: and lest the systems of the fixed stars should, by their gravity, fall on each other mutually, he hath placed those systems at immense distances one from another. This Being governs all things, not as the soul of the world, but as Lord over all; and on account of his dominion he is wont to be called Lord God pantokrator, or Universal Ruler;,,, The Supreme God is a Being eternal, infinite, absolutely perfect;,,, from his true dominion it follows that the true God is a living, intelligent, and powerful Being; and, from his other perfections, that he is supreme, or most perfect. He is eternal and infinite, omnipotent and omniscient; that is, his duration reaches from eternity to eternity; his presence from infinity to infinity; he governs all things, and knows all things that are or can be done. He is not eternity or infinity, but eternal and infinite; he is not duration or space, but he endures and is present. He endures for ever, and is every where present: Sir Isaac Newton - Quoted from what many consider the greatest science masterpiece of all time, his book "Principia"
Sir Isaac Newton was certainly not the only founder of modern science who was not bashful as to giving glory to God. In the following article are several eye opening quotes:
Founders of Modern Science Who Believe in GOD – Tihomir Dimitrov http://www.scigod.com/index.php/sgj/article/viewFile/18/18
Many may object that amazing 'leaps of intuition' in modern science have been accomplished by people, such as Einstein, who had no professed faith in Jesus Christ. And indeed Einstein had brilliant thought experiments, such as this one of 'riding a beam of light':
Albert Einstein - Special Relativity - Insight Into Eternity - 'thought experiment' video http://www.metacafe.com/w/6545941/
But if we 'peek under the hood' to see what enabled Einstein to make this 'leap of intuition' of 'riding a light beam' successfully, we find that a lot of preparatory work preceded Einstein's 'leap of intuition' on special relativity:
It is easily proven that Albert Einstein did not originate the special theory of relativity in its entirety, or even in its majority.1 The historic record is readily available. Ludwig Gustav Lange,2 Woldemar Voigt,3 George Francis FitzGerald,4 Joseph Larmor,5 Hendrik Antoon Lorentz,6 Jules Henri Poincaré,7 Paul Drude,8 Paul Langevin,9 and many others, slowly developed the theory, step by step, http://home.comcast.net/~xtxinc/prioritymyth.htm
Moreover, to be more particular to Christianity, if we look at Einstein's central masterpiece, General Relativity, we find that it was made possible by the work of a devout Christian mathematician, Bernhard Riemann:
The Mathematics Of Higher Dimensionality - Gauss & Riemann http://www.metacafe.com/watch/6199520/
This is not to take any credit away from the staggering genius of Einstein in effectively channeling his imagination in his precise thought experiments, which were, without question, penetratingly effective, but just to say, in all fairness, that in doing so Einstein had to stand on the shoulders of giants who themselves had stood on the shoulders of Christ in order to make his 'leap of intuition' scientifically fruitful! Moreover, to add to the plausibility of Christ's unique claim to be the source of truth, if we look for 'leaps of intuition' (successful searches for truth) throughout the history of modern science, we notice a very strange pattern in scientific discovery in Judeo-Christian cultures. A very strong piece of suggestive evidence, which persuasively hints at a unique relationship that man has with 'The Word' of John 1:1, is found in the following articles, which point out that 'coincidental scientific discoveries' are far more prevalent than what should be expected from a materialistic perspective:
List of multiple discoveries Excerpt: Historians and sociologists have remarked on the occurrence, in science, of “multiple independent discovery”. Robert K. Merton defined such “multiples” as instances in which similar discoveries are made by scientists working independently of each other.,,, Multiple independent discovery, however, is not limited to only a few historic instances involving giants of scientific research. Merton believed that it is multiple discoveries, rather than unique ones, that represent the common pattern in science. http://en.wikipedia.org/wiki/List_of_multiple_discoveries In the Air – Who says big ideas are rare? by Malcolm Gladwell Excerpt: This phenomenon of simultaneous discovery—what science historians call “multiples”—turns out to be extremely common. One of the first comprehensive lists of multiples was put together by William Ogburn and Dorothy Thomas, in 1922, and they found a hundred and forty-eight major scientific discoveries that fit the multiple pattern. Newton and Leibniz both discovered calculus. Charles Darwin and Alfred Russel Wallace both discovered evolution. Three mathematicians “invented” decimal fractions. Oxygen was discovered by Joseph Priestley, in Wiltshire, in 1774, and by Carl Wilhelm Scheele, in Uppsala, a year earlier. Color photography was invented at the same time by Charles Cros and by Louis Ducos du Hauron, in France. Logarithms were invented by John Napier and Henry Briggs in Britain, and by Joost Bürgi in Switzerland. ,,, For Ogburn and Thomas, the sheer number of multiples could mean only one thing: scientific discoveries must, in some sense, be inevitable. http://www.newyorker.com/reporting/2008/05/12/080512fa_fact_gladwell/?currentPage=all
bornagain77
Jon, Yes, they have many misconceptions and still no evidence that blind and undirected processes can do it. Joe
Joe I've looked at the Skeptical Zone thread, and the previous one linked to there (replying to NFL). You're right, they are critical of Dembski. Whether they constitute "refutation" is quite another matter. One small point that occurs to me. The "Methinks it is like a weasel" example is still defended, though rightly criticised as targeted on a particular result. Presumably it would be more convincing to mutate a sentence through a whole sequence of random, but readable, permutations, but unfortunately that doesn't actually work. Yet Alan Fox says "There is no reason to suppose functionality is not common in the set of all possible protein sequences", in other words what doesn't work in simple English sentences is easy-peasy in real cells. There's significant dispute over the facts of the case, of course (there are some reasons for supposing functionality not to be common), but you wouldn't intuitively think that cells are easier to evolve than sentences, would you? Jon Garvey
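For readers unfamiliar with the example Jon mentions, here is a minimal sketch of the 'weasel'-style cumulative-selection toy as it is commonly described; the parameters (a 5% per-character mutation rate, 100 offspring per generation, keeping the parent in the pool) are my own illustrative choices, not Dawkins's exact settings.

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def mutate(phrase, rate=0.05):
    """Copy the phrase, changing each character with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

def score(phrase):
    """Number of positions that already match the pre-specified target."""
    return sum(a == b for a, b in zip(phrase, TARGET))

phrase = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while phrase != TARGET:
    # Cumulative selection: keep the best of the parent and its offspring,
    # where "best" means closest to the fixed target phrase.
    candidates = [phrase] + [mutate(phrase) for _ in range(100)]
    phrase = max(candidates, key=score)
    generation += 1

print(f"reached the target in {generation} generations")
```

Because the target phrase is baked into score(), the run illustrates the "targeted on a particular result" criticism rather than anything about undirected search.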
Hey Jon, TSZ has some "critics" LoL!:
Later, he cheerfully admits that the evolution of nylonase represents an increase of information:
Nylon, for instance, is a synthetic product invented by humans in 1935, and thus was absent from bacteria for most of their history. And yet, bacteria have evolved the ability to digest nylon by developing the enzyme nylonase. Yes, these bacteria are gaining new information, but they are gaining it from their environments, environments that, presumably, need not be subject to intelligent guidance. No experimenter, applying artificial selection, for instance, set out to produce nylonase.
So much for his fellow IDers who insist that mutation invariably causes a loss of information. Dembski admits that there is a gain of information, but argues that it falls below his 500-bit CSI threshold and therefore does not indicate design.
1- There isn't any evidence that blind and undirected processes produced nylonase.
2- It could very well be that nylonase arose via built-in responses to environmental cues.
These opponents have absolutely no clue.... Joe
An excellent article, and hard to imagine it's refutable (certainly nobody seems to have tried in any half-convincing way in response to Dembski's previous statements of it). Presumably this would also exclude the validity of all emergence theories explaining life's complexities. They may indeed be shown to explain things, but that would require that the capabilities of the systems were implicit in their components, not wholly new. I wonder if Dembski has made any specific references to this, as it is one of the principal theoretical alternatives to Darwinism and ID. Jon Garvey
dg4 @3 "Perhaps the multiverse exists as an array of possibilities in the mind of God." Isn't that just an extravagant way of describing all design? You could describe this post as choosing between a Universe without my post and a similar Universe with it. With God it has slightly more relevance, as he creates the whole show - he thinks of this Universe in preference to all the others he could conceive. But if he is omniscient, why would he need to conceive, and then reject, all the myriads of Universes that wouldn't work out? Even we restrict our efforts to only a relatively few possibilities, and unlike God we are not pure Wisdom, which excludes the need for either real or virtual multiverses. Jon Garvey
William Dembski, needful information for reading. Thank you. sergio sergiomendes
Bill, good to see your post and I look forward to checking out your article. This is an important area of discussion. Eric Anderson
The idea that this life-supporting universe, with its necessary fine tuning, is the result of a search is an intriguing one. The inventor considers many possibilities, and even builds unsuccessful prototypes, prior to realizing his design. Perhaps the multiverse exists as an array of possibilities in the mind of God. dgw
Dr. Dembski, Here is 'conservation of information' explained so that even complete IDiots (pun intended) like me can understand it: :)
Unevolved Arthropods Found in Amber - August 28, 2012 Excerpt: 230 million years old, 100 million years older than the previous record holders,,, The ancient gall mites are surprisingly similar to ones seen today.,,, “You would think that by going back to the Triassic you’d find a transitional form of gall mite, but no,” Grimaldi said. “Even 230 million years ago, all of the distinguishing features of this family were there—a long, segmented body; only two pairs of legs instead of the usual four found in mites; unique feather claws, and mouthparts.” http://crev.info/2012/08/unevolved-arthropods-in-amber/
bornagain77
Thank you William. Reading it right now. julianbre
