Uncommon Descent Serving The Intelligent Design Community

“Conservation of Information Made Simple” at ENV


Evolution News & Views just posted a long article I wrote on conservation of information.

EXCERPT: “In this article, I’m going to follow the example of these books, laying out as simply and clearly as I can what conservation of information is and why it poses a challenge to conventional evolutionary thinking. I’ll break this concept down so that it seems natural and straightforward. Right now, it’s too easy for critics of intelligent design to say, ‘Oh, that conservation of information stuff is just mumbo-jumbo. It’s part of the ID agenda to make a gullible public think there’s some science backing ID when it’s really all smoke and mirrors.’ Conservation of information is not a difficult concept and once it is understood, it becomes clear that evolutionary processes cannot create the information required to power biological evolution.” MORE

TEASER: The article quotes some interesting email correspondence that I had with Richard Dawkins and with Simon Conway Morris, now going back about a decade, but still highly relevant.

Comments
Comments 24-27
That's back in the beginning when you were arguing that |Ω| should be 2 instead of 5, since there are only 2 possible outcomes for a given initial state. Then you reversed your position, saying that "Ω can contain zero probability sections". Are you back to your original position again?
and 164
Do you understand that everything in Ω is equiprobably accessible by a single realization of the baseline search, by definition?
Then there is the fact that in your example the target can be had on the initial drop-in, which makes the next level search moot. IOW it isn’t a search for a search at all.
If you're using the term "search" in the conventional way, then most of Dembski's examples aren't searches. Do you call it a "search" when you roll a die a single time? When you get a fortuitous roll in a board game, do you say, "Wow, that was a great search"? In Dembski's framework, a "search" is a random variable. If you can find anywhere that he restricts the definition further, please point it out to me. A "search for a search", then, is a chain of two random variables. Again, if you can find a more restrictive definition in Dembski's work, please show me. My example is a chain of two random variables, and therefore a "search for a search" according to Dembski's usage. If it bothers you that the higher-level search space and the lower-level search space contain some of the same states, then we can easily relabel the states so they're not the same, and the analysis won't be affected at all. It won't be a two-dimensional random walk any more, but it will still be a chain of two random variables and a counterexample to the LCI.
R0bb
September 17, 2012 at 10:24 PM PDT
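R0bb's definition of a "search for a search" as a chain of two random variables can be made concrete with a toy simulation. The three lower-level distributions below are invented for illustration and do not come from Dembski and Marks:

```python
import random

random.seed(0)

# Higher-level "search": pick one of three lower-level searches,
# i.e. probability distributions over the same outcome space.
# These particular distributions are made up for illustration.
lower_searches = [
    {"A": 0.5, "B": 0.5, "C": 0.0},
    {"A": 0.0, "B": 0.5, "C": 0.5},
    {"A": 0.25, "B": 0.5, "C": 0.25},
]

def search_for_a_search():
    # First random variable: choose a lower-level search uniformly.
    dist = random.choice(lower_searches)
    # Second random variable: realize an outcome from that search.
    outcomes, weights = zip(*dist.items())
    return random.choices(outcomes, weights=weights)[0]

print(search_for_a_search())
```

Nothing more is needed to satisfy the definition R0bb attributes to Dembski: two chained random variables, the first selecting the distribution of the second.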
Comments 24-27 and 164, for starters. But again you have already read and choked on those, so there is no way you will change your position just by rereading them. Then there is the fact that in your example the target can be had on the initial drop-in, which makes the next level search moot. IOW it isn't a search for a search at all.
Joe
September 16, 2012 at 07:49 AM PDT
Joe, can you at least tell me the number of the comment in which you told me what the misrepresentation is? I honestly don't know what you're referring to, and if I've misrepresented Dembski in any way, I want to correct it.
R0bb
September 16, 2012 at 07:44 AM PDT
R0bb, I have already answered that question/ told you what the misrepresentation is. Just because you didn't like the answer or cannot follow along doesn't mean it wasn't answered.
Joe
September 16, 2012 at 06:52 AM PDT
Your random walk is a strawman because it misrepresents Dembski’s position.
You know, of course, that I'm going to ask the obvious question: How does the random walk example misrepresent Dembski's position? It would help if you would tell us what the alleged misrepresentation is. For example, "Dembski says ____________ but the random walk example seems to assume that Dembski says _____________."
R0bb
September 16, 2012 at 06:38 AM PDT
As for refuting Darwinian evolution, well there still isn't anything to refute because "that which can be asserted without evidence can be dismissed without evidence" (Hitchens).
Joe
September 15, 2012 at 12:05 PM PDT
R0bb, Your random walk is a strawman because it misrepresents Dembski's position.
Joe
September 15, 2012 at 12:01 PM PDT
But you can take your strawman and hump it, you seem to be good at that.
Thank you for the kind words, Joe. I love you too. A strawman is a misrepresentation of an opponent's position. Can you quote something from me that misrepresents Dembski's position? WRT your quote from Dembski in #177, that's not the same LCI that he's talking about in the OP. His old LCI says that a "necessity+chance" process cannot increase the net CSI by more than 500 bits. His new LCI says that the active information in a lower-level search cannot exceed the endogenous information in the higher-level search.
R0bb
September 15, 2012 at 10:30 AM PDT
Thanks, onlooker. You're right about the NFL theorems, which are as problematic as the LCI for ID proponents who try to apply them to nature. Nature, as we observe it, comes nowhere near satisfying the NFL or LCI conditions. In order to construct an ID argument from NFL or the LCI, we have to assume that nature itself arose from a "search" that does satisfy the NFL or LCI conditions. That's where Dembski's appeal to the Principle of Insufficient Reason comes in. Dembski realizes that the LCI can't refute hypotheses regarding more immediate causes, like Darwinian evolution. He says:
Nature is a matrix for expressing already existent information. But the ultimate source of that information resides in an intelligence not reducible to nature. The Law of Conservation of Information, which we explain and justify in this paper, demonstrates that this is the case. Though not denying Darwinian evolution or even limiting its role as an immediate efficient cause in the history of life, this law shows that Darwinian evolution is deeply teleological.
R0bb
September 15, 2012 at 10:16 AM PDT
And MathGrrl exposes its ignorance:
There we have CSI (and intelligent design creationism) in a nutshell — we didn’t see it happening, therefore Jesus.
1- As opposed to Patrick's position, which sez "we didn't see it happening, therefore nature didit/ it just happened."
2- Intelligent design creationism exists only in the closed minds of the wilfully ignorant. And here is Patrick.
3- Science, Patrick- Ya see, cause and effect relationships, in accordance with uniformitarianism, tell us that agency and only agency can account for the presence of CSI- SCIENCE, Patrick. As opposed to your position, which can only say "anything but ID no matter what!"
4- Nothing about Jesus in anything I posted. IOW Patrick requires falsehoods in order to score brown-nose points with his anti-science buddies.
Joe
September 15, 2012 at 09:23 AM PDT
Natural causes are therefore incapable of generating CSI. This broad conclusion I call the Law of Conservation of Information, or LCI for short. LCI has profound implications for science. Among its corollaries are the following: (1) The CSI in a closed system of natural causes remains constant or decreases. (2) CSI cannot be generated spontaneously, originate endogenously, or organize itself (as these terms are used in origins-of-life research). (3) The CSI in a closed system of natural causes either has been in the system eternally or was at some point added exogenously (implying that the system though now closed was not always closed). (4) In particular, any closed system of natural causes that is also of finite duration received whatever CSI it contains before it became a closed system.- wm dembski
So this tells us that when we observe CSI in the real world and did not observe it arising, we can safely infer some agency was present to make it so.
Joe
September 15, 2012 at 05:22 AM PDT
onlooker:
The most significant in terms of real world applicability is that the NFL theorems apply across all possible fitness landscapes, whereas evolution only has to work in the one we know about (aka reality).
What "evolution" are you talking about? Blind watchmaker evolution, ie the modern synthesis, doesn't seem to work at all. IOW your equivocation is duly noted, as is the total lack of evidence for your position.
Joe
September 15, 2012 at 05:14 AM PDT
R0bb, Your random walk example is a strawman. Period, end of story. So no, I don't have anything else to discuss with you on this topic. But you can take your strawman and hump it, you seem to be good at that.
Joe
September 15, 2012 at 05:11 AM PDT
R0bb,
scordova, Chance Ratliff, Dieb, onlooker, and anyone else who might read this comment: Do any of you agree that the following is how the LCI applies to the real world?
As for how the LCI applies to the real world- It tells us that having directions or a recipe is an easier way to have a successful search than to just do stuff until you get what you want. If you have directions they narrow your search grid, ie they provide active information. The same with a recipe.
I'm responding to this just to let you know that at least one onlooker is still following this discussion. I'm enjoying your explanations. As far as your question goes, no, I don't think that what you quoted is how LCI applies to the real world because I don't think the LCI applies to the real world at all. The "Law of Conservation of Information" is not a law and describes nothing that is conserved. Mark Chu-Carroll has reviewed Dembski's paper and points out a number of serious flaws. The most significant in terms of real world applicability is that the NFL theorems apply across all possible fitness landscapes, whereas evolution only has to work in the one we know about (aka reality). Dembski might be able to construct something like the anthropological argument (privileged universe) from the NFL theorems, but he certainly can't use them to claim that evolution can't happen.
onlooker
September 14, 2012 at 08:32 AM PDT
Joe, if we can't agree on something as fundamental as the fact that zero-probability outcomes cannot be realized, then I'm afraid we're at an impasse. And I'm not trying to exemplify any of the Dembski/Marks examples. I'm applying the LCI to a different example, namely a two-dimensional random walk, to show that the LCI doesn't always hold.

With regards to the random walk model, the equiprobability of the three initial states is an assumption that we're required to make in order to apply the LCI to the model. It's part of the LCI analysis, not necessarily the model itself. But if the model as described in the TSZ post isn't entirely clear, we can describe it in terms of a transition matrix instead. (I'll use a right-stochastic matrix for convenience.) According to the LCI's required assumption, the initial probability vector is:

[1/3  1/3  1/3]

According to the definition of a two-dimensional random walk, the transition matrix is:

[1/2   0   1/2   0    0 ]
[ 0   1/2   0   1/2   0 ]
[ 0    0   1/2   0   1/2]

The resulting probability vector is their product:

[1/6  1/6  1/3  1/6  1/6]

This is the beginning of a binomial distribution, which is exactly what we expect for a two-dimensional random walk. A binomial distribution is biased toward the center position, which is why the random walk violates the LCI.

When Marks and Dembski apply the LCI to examples, the examples are always chosen such that the uniformity of the initial probability vector is preserved in the resulting probability vector. This means that all columns in the transition matrix must have the same sum. Their definition of the LCI does not specify such a requirement, and not all non-contrived models have such a transition matrix, the random walk being a case in point. And as I've already pointed out, Marks and Dembski's example in the S4S paper, to which they do not attempt to apply the LCI, does not have such a transition matrix, and in fact violates the LCI.
R0bb
September 14, 2012 at 08:04 AM PDT
Wrong again as Dembski s4s search never says any squares are inaccessible. As I said before you are still free to choose a known loser square. But that would defeat the purpose of active information, but you could still do it.
Actually, according to the mathematical model of the process, you cannot.
No R0bb, they just don't expect anyone to, but they did not account for evolutionists. Bottom line R0bb, your random walk example does not exemplify any of the Dembski/ Marks examples.
So in my random walk example, 3 states are accessible via 1 move, and all 5 states are accessible via 2 moves.
Just because you are convinced that does not make it so. Ya see R0bb, you would have to rewrite your example- a complete rewrite. You would need to add a "drop-in" stage- a stage in which any of the initial 3 states can be reached. THEN you would have to change that to a shift to the right or left. But after that drop-in stage only two possible choices remain, not 5.
Joe
September 14, 2012 at 04:28 AM PDT
Joe:
Baseline search:
– Dembski’s dice example: All outcomes in Ω accessible in a single “move”.
– S4S “squares” example: All outcomes in Ω accessible in a single “move”.
– My random walk example: All outcomes in Ω accessible in a single “move”.
That is false as in your example not all outcomes in Ω are accessible in a single move. In your example the starting point determines which two spaces are accessible in a single move.
The "starting point" in the random walk, i.e. the initial state, is an alternate search, not the baseline search. For a given alternate search, only two outcomes are accessible. But the baseline search is defined such that all outcomes in Ω are equally probable. To say that not all outcomes in Ω are accessible to the baseline search is to deny Dembski and Marks' definition of "baseline search". If you don't approve of their definition, you're free to take it up with them.
Wrong again as Dembski s4s search never says any squares are inaccessible. As I said before you are still free to choose a known loser square. But that would defeat the purpose of active information, but you could still do it.
Actually, according to the mathematical model of the process, you cannot. If a scenario in which a person cannot choose one of the 3 zero-probability squares seems unrealistic to you, then the mathematical model simply doesn't apply to free agents. The S4S example is a stochastic process, and the active information is a random variable. If the stochastic process in question is a person, we have to be careful not to conflate epistemic information with active information. Epistemic information can be ignored or poorly utilized -- we can choose to dig for treasure in a place other than the map tells us. Active information, on the other hand, dictates the probability of each respective outcome being "chosen". If it confers zero probability on a square, then it is impossible for that square to be "chosen". To say otherwise is to say, absurdly, that the zero-probability square doesn't have zero probability.
Wrong again. Ω gets switched to the machine and then to the six.
I'm using Ω in the same sense that Dembski and Marks use the symbol -- it's the sample space of the lower-level search. Are you using it in a different sense? Dembski and Marks use a different symbol for the sample space of the higher-level search. For example, in the S4S paper, they call it M(Ω).
My random walk example: 3 outcomes are accessible in a single “move”, since the high level search returns 1 of 3 states ∈ Ω.
That all depends on where you start, R0bb. And you never specified that. Ya see if you start on one of the 3 initial states then only 2 are accessible with one move and after that only two others are accessible.
All outcomes are accessible via 2 “moves”.
Nope. Only two other outcomes are accessible.
In trying to clarify what you mean by "move", I said before: "By 'move', I assume you’re referring to a realization of a 'search' (which in Dembski’s framework is really just a random variable)." Since you didn't respond, this is still my assumption. The first "move" is the realization of one of the 3 starting states. According to Dembski and Marks' framework, the probability distribution over these 3 states is uniform. The second "move" is the realization of one of the two states that are accessible from the initial state. So in my random walk example, 3 states are accessible via 1 move, and all 5 states are accessible via 2 moves.
R0bb
September 13, 2012 at 07:40 PM PDT
R0bb:
Baseline search:
- Dembski’s dice example: All outcomes in Ω accessible in a single “move”.
- S4S “squares” example: All outcomes in Ω accessible in a single “move”.
- My random walk example: All outcomes in Ω accessible in a single “move”.
That is false as in your example not all outcomes in Ω are accessible in a single move. In your example the starting point determines which two spaces are accessible in a single move. IOW, R0bb, you are a liar.
Alternate search:
- Dembski’s dice example: All outcomes in Ω are accessible in a single “move”.
- S4S “squares” example: Some outcomes in Ω are inaccessible in a single “move”.
- My random walk example: Some outcomes in Ω are inaccessible in a single “move”.
Wrong again as Dembski s4s search never says any squares are inaccessible. As I said before you are still free to choose a known loser square. But that would defeat the purpose of active information, but you could still do it.
- Dembski’s dice example: No outcomes in Ω are accessible in a single “move”, since the high-level search returns a machine, not a number. All outcomes are accessible via 2 “moves”.
Wrong again. Ω gets switched to the machine and then to the six.
- My random walk example: 3 outcomes are accessible in a single “move”, since the high level search returns 1 of 3 states ∈ Ω.
That all depends on where you start, R0bb. And you never specified that. Ya see if you start on one of the 3 initial states then only 2 are accessible with one move and after that only two others are accessible.
All outcomes are accessible via 2 “moves”.
Nope. Only two other outcomes are accessible.
Joe
September 13, 2012 at 04:27 AM PDT
Joe:
The comment you chose to ignore for obvious reasons.
I'll naively assume that when you say "for obvious reasons", you're referring to the timestamp on my comment #165, which shows that I submitted it late at night (in the US), as is the case with the comment you're reading. Sometimes I go to sleep rather than immediately respond to further comments, lazy person that I am. Believe me, I know how important it is to be responsive. You, of course, know this also, which is why I know that when you get time, you'll go back and answer the dozens of questions that you have skipped over. When you do, a lot of things will become much more clear. I'll find out what you mean when you say "the equation", whether you realize that you have changed your position on your original objection to my random walk example, where exactly I mangled Dembski's words regarding individuation of outcomes, whether the LCI is a true law if it fails in mathematically valid cases, how Dembski's three CoI theorems apply to the real world, whether you've read Dembski's CoI proofs, whether you're interested in having a civil discussion, etc.
R0bb
September 12, 2012 at 11:25 PM PDT
Joe:
Ω has to be defined as something you can get to with one move. For a fair six-sided die Ω is 1-6. The example we have been discussing has Ω = 16, each with an equal probability of being searched with one move. In R0bb’s random walk example you have 2 possible positions to be in after one move, yet he sets his Ω at 5, which messes up the equation (GIGO) and sez “I have disproven the LCI”.
Again, what "equation" are you talking about? By "move", I assume you're referring to a realization of a "search" (which in Dembski's framework is really just a random variable). But are you talking about the baseline search, the alternate search, or the two-tier combination of the S4S + alternate search? If you're talking about the baseline search, remember that the definition of the baseline search is based on the Principle of Indifference. The baseline search is defined as an equiprobable distribution over Ω. Therefore, every outcome in Ω is accessible by a single realization of the baseline search. This is always true, by definition.

Here's a comparison of what is accessible in 1 or 2 "moves" in the three examples we've discussed:

Baseline search:
- Dembski's dice example: All outcomes in Ω accessible in a single "move".
- S4S "squares" example: All outcomes in Ω accessible in a single "move".
- My random walk example: All outcomes in Ω accessible in a single "move".

Alternate search:
- Dembski's dice example: All outcomes in Ω are accessible in a single "move".
- S4S "squares" example: Some outcomes in Ω are inaccessible in a single "move".
- My random walk example: Some outcomes in Ω are inaccessible in a single "move".

Two-tier search:
- Dembski's dice example: No outcomes in Ω are accessible in a single "move", since the high-level search returns a machine, not a number. All outcomes are accessible via 2 "moves".
- S4S "squares" example: No outcomes in Ω are accessible in a single "move", since the high-level search returns a distribution over a subset of Ω, rather than an individual square. All but 3 outcomes are accessible via 2 "moves".
- My random walk example: 3 outcomes are accessible in a single "move", since the high-level search returns 1 of 3 states ∈ Ω. All outcomes are accessible via 2 "moves".

Is there anything in the above comparison that you disagree with? If not, can you point to exactly where some outcomes are inaccessible in "one move" in my random walk, but accessible in "one move" in Dembski's examples?
R0bb
September 12, 2012 at 11:19 PM PDT
Joe:
Again it is the search space, not omega, that is relevant. That is what I am saying.
Yes, that's what you're saying now. But at the beginning of our conversation, it was my definitions of Ω that you were objecting to. Do you see how your more recent statements regarding Ω contradict your earlier objections? Have you retracted those objections? The amount of active information in a search and whether a two-tier search scenario violates the LCI hinge on |Ω|. These are the issues we're discussing, so how can Ω not be relevant?
How do you know the map you have is correct? Or the recipe is what you want?
In my experience, maps and recipes usually have titles, and publishers like Rand McNally and Betty Crocker are known to make maps and recipes that are accurate and match their titles. Is your experience different from this?
R0bb
September 12, 2012 at 11:12 PM PDT
R0bb:
Ω can be defined to include 5 outcomes or 5 zillion outcomes.
If and only if each of the 5 zillion outcomes can be had in one move. And in YOUR random walk example that is not true, therefore your Ω is improperly defined, as explained in comment 164. The comment you chose to ignore for obvious reasons. You lose.
Joe
September 11, 2012 at 04:20 AM PDT
Joe:
Nice spin.
If I didn't accurately reflect what you were saying, then by all means, give us a more accurate summary of your examples.
Again it is the search space, not omega, that is relevant.
Our dispute is over the definitions of Ω in my examples, so how could Ω not be relevant?
That you keep going back to omega tells me you have deceptive intentions.
Your objections to my examples were in regards to my definitions of Ω. In the first example I said |Ω| = 5, and you said it should be 2, because there are only 2 possible outcomes from a given initial state. You have since reversed that position. I'm now trying to get an acknowledgement from you that your objection isn't valid. This is the opposite of deception -- I'm trying to get everything out in the open and stated plainly.
Because, in reality, your example’s omega would be much greater than 5 because there are more zero-probability outcomes you haven’t included.
Yes, exactly! Ω can be defined to include 5 outcomes or 5 zillion outcomes. If you're using standard statistical measures or stochastic process tools, it doesn't matter, other than a question of convenience. But if you're using Dembski's framework, it does matter. The amount of active information depends on how you choose to model Ω. So given a real-world process X and an accurate and mathematically valid model M, how do I determine whether M is a "proper" model? It's easy to object to models on an ad hoc basis, calling them "muddled" or "improper", whatever that means. But how do you generalize your rules of modeling, such that all of Dembski's examples and CoI theorems are "proper"? I'm all ears.
R0bb
September 10, 2012 at 11:25 PM PDT
Ω has to be defined as something you can get to with one move. For a fair six-sided die Ω is 1-6. The example we have been discussing has Ω = 16, each with an equal probability of being searched with one move. In R0bb's random walk example you have 2 possible positions to be in after one move, yet he sets his Ω at 5, which messes up the equation (GIGO) and sez "I have disproven the LCI".
Joe
September 10, 2012 at 06:35 PM PDT
R0bb:
It’s certainly true that a search has greater probability of success if we have something that increases its probability of success.
How do you know the map you have is correct? Or the recipe is what you want?
Joe
September 10, 2012 at 02:27 PM PDT
R0bb:
It’s certainly true that a search has greater probability of success if we have something that increases its probability of success.
Nice spin. R0bb:
I explicitly said “from Ω” and “in Ω”.
Again it is the search space, not omega, that is relevant. That is what I am saying. That you keep going back to omega tells me you have deceptive intentions. Because, in reality, your example's omega would be much greater than 5 because there are more zero-probability outcomes you haven't included. Also with Dembski's example you can get to ANY square at the first move- even the zero-probability squares, if you so choose to pick a known loser. In your random walk example that is not so as you are very limited in the moves you can make.
Joe
September 10, 2012 at 02:24 PM PDT
scordova:
If you say that |T|/|Ω| = 1/2 because there are only two states, that's not quite correct because if they are not equiprobable states, you have to modify the way you do the accounting; the fact that P(T) is 5/6 is an indication that they are not equiprobable states, and hence the ratio |T|/|Ω| needs to account for this.
|T|/|Ω| is the probability of success of the baseline search. In the baseline search, outcomes are always equiprobable, by definition. So the baseline search for my model has this probability distribution:

P("1") = 1/2
P("higher than 1") = 1/2

If that strikes you as problematic, then we're on the same page. That's how Dembski and Marks' framework is defined, and it's a problem. You could argue that Ω shouldn't be defined as {"1", "higher than 1"}, but there is nothing mathematically wrong with defining it that way. Dembski could add a caveat to the definition of "active information" saying that Ω must be properly defined, but then how do we define "properly defined"?
R0bb
September 10, 2012 at 01:46 PM PDT
Joe:
Ω can contain zero probability sections and those zero probability sections are never considered in the equation.
What equation are you talking about? Active information is a function of Ω, and therefore the amount of active information changes depending on whether we include zero-probability outcomes in Ω. By acknowledging that Ω can contain zero-probability outcomes, you've reversed your previous position. You seem to have forgotten that your objection to my random walk example was over the definition of Ω. You said that Ω is 2 instead of 5, since there are only 2 possibilities for a given initial state. Now you're saying that outcomes needn't be possible in order to be included in Ω. Now that you've nullified your objection, is our disagreement settled? If you were to read Dembski and Marks' papers, you would realize that your objection never made sense in the first place. If we were to define Ω such that it includes only the outcomes accessible to a given state, we would have to have a different Ω for each initial state. That's not how Dembski and Marks' framework works, as you'll see for yourself if you read their work.
R0bb
September 10, 2012 at 01:44 PM PDT
Joe:
So no, we were not talking about their inclusion in the definition of Ω, we were talking about their inclusion in the search space. At least I was and I made that very clear.
The subconversation about including numbers 7+ started with me saying, "Consistently excluding zero-probability outcomes from Ω would yield bizarre results." (Emphasis added.) You answered, "So with the dice example I quoted above does that mean we should also include numbers 7 - infinity?" Are you now saying that you were responding to my point about excluding outcomes from Ω with a question about including outcomes in something other than Ω? And without any indication that you were changing the subject? How does that make sense? Here is the whole thread, with the references to Ω bolded:
R0bb:
Consistently excluding zero-probability outcomes from Ω would yield bizarre results.
So with the dice example I quoted above does that mean we should also include numbers 7 - infinity? And of course a coin toss would then have more than two outcomes- in zero G. Wow R0bb, thanks. That clears up my misunderstanding. If we just do whatever we want we can violate the LCI. I bet you ran around the table with your hands in the air, cheering for yourself, once you figured that out. Thumbs high, big guy
Consistently excluding zero-probability outcomes from Ω would yield bizarre results.
So with the dice example I quoted above does that mean we should also include numbers 7 - infinity?
No, it means that if we always exclude zero-probability outcomes from Ω, then in some cases active information will decrease when a search improves. Do you agree? As for including numbers 7 - infinity (by which I assume you mean all integers greater than 6, none of which is actually infinity), is there any reason not to do so, other than inconvenience?
#44 As for including numbers 7 - infinity (by which I assume you mean all integers greater than 6, none of which is actually infinity), is there any reason not to do so, other than inconvenience?
Yes, if you include them then you could never figure out the odds of rolling a "6" with a fair die. As I said you appear not to know anything about the topic that you are trying to discuss. But seeing that you are anonymous you don't care that you look foolish.
As for including numbers 7 - infinity (by which I assume you mean all integers greater than 6, none of which is actually infinity), is there any reason not to do so, other than inconvenience?
Yes, if you include them then you could never figure out the odds of rolling a "6" with a fair die
The probability of rolling a 6 would still be 1/6, and every number higher than 6 would have a probability of zero. The mean, median, variance, etc. are unaffected by inclusion of the higher numbers in Ω. Can you explain what the problem is?
R0bb:
The probability of rolling a 6 would still be 1/6, and every number higher than 6 would have a probability of zero.
Then the numbers higher than 6 are NOT included.
Joe:
Then the numbers higher than 6 are NOT included.
We're talking about inclusion in the definition of Ω. Are you under the impression that if an outcome has zero probability, then it's automatically excluded from the definition of the sample space? I thought we were now in agreement that Ω can contain zero-probability outcomes, since it does so in the example from the S4S paper. If not, there are some questions that I've already asked that will settle this if you'll attempt to answer them. Will you?
R0bb, You keep equating Ω with the searchable space. The two are not the same. Ω can contain zero probability sections and those zero probability sections are never considered in the equation. So no, we were not talking about their inclusion in the definition of Ω, we were talking about their inclusion in the search space. At least I was and I made that very clear.
I explicitly said "from Ω" and "in Ω". Where in the thread regarding the inclusion of 7+ did you indicate that you were talking about something other than Ω?
R0bb
September 10, 2012 at 01:41 PM PDT
Joe:
scordova, Chance Ratliff, Dieb, onlooker, and anyone else who might read this comment: Do any of you agree that the following is how the LCI applies to the real world?
R0bb if you doubt it why don't you just make your case?
Because I think it would help if you saw that others disagree with you. But I doubt that anyone is reading our conversation. As for your understanding of what the LCI tells us:
As for how the LCI applies to the real world- It tells us that having directions or a recipe is an easier way to have a successful search than to just do stuff until you get what you want. If you have directions they narrow your search grid, ie they provide active information. The same with a recipe.
It's certainly true that a search has greater probability of success if we have something that increases its probability of success. But do you really think that we need the LCI to tell us this tautological fact? The LCI doesn't come into play unless we factor in the "cost" of finding whatever it is that increases the original search's probability of success, in which case the probability of finding the original target goes down (or stays the same), not up, according to LCI.
R0bb
September 10, 2012 at 01:33 PM PDT
