Uncommon Descent Serving The Intelligent Design Community

A new approach to probability?


Three approaches to probability are analysed (frequencies, propensities, and degrees of belief), and a fourth, degree of support, is proposed:

… our four theories offer divergent advice on how to figure out the values of probabilities. The first three interpretations (frequency, propensity and confidence) try to make probabilities things we can observe – through counting, experimentation or introspection. By contrast, degrees of support seem to be what philosophers call ‘abstract entities’ – neither in the world nor in our minds. While we know that a coin is symmetrical by observation, we know that the proposition ‘this coin is symmetrical’ supports the propositions ‘this coin lands heads’ and ‘this coin lands tails’ to equal degrees in the same way we know that ‘this coin lands heads’ entails ‘this coin lands heads or tails’: by thinking.

Nevin Climenhaga, “The concept of probability is not as simple as you think” at Aeon

Hmmm. “Thinking” is a risky strategy to recommend in a world where consciousness is supposed to be an illusion or else a material thing — unless, of course, your coffee mug has it too.

No legerdemain around probability is going to make most claims of naturalism (the view that nature is all there is, often called “materialism”) true. But not for lack of trying.

See also: Probability of a single protein forming by chance

and

Confusing Probability: The “Every-Sequence-Is-Equally-Improbable” Argument

Follow UD News at Twitter!

Comments
H, it is a semi-popular article introducing the epistemological view on probability. KF

kairosfocus
March 1, 2019, 09:18 AM PDT
I read the article linked at 3. I agree with the statement,
This also implies, as I believe, there is no such thing as “The probability of X” on its own; i.e. there is no unconditional probability.
It's always "the probability of X" given certain knowledge or assumptions. The source of the knowledge can vary, and the level of certainty about that knowledge can vary, but our statements about probability always depend on some propositions about the background. This seems non-controversial, and I don't see what "new approach" is being offered here.

hazel
March 1, 2019, 06:37 AM PDT
I agree with Bob at 5. I don't think there is much, if anything, to this article.

hazel
March 1, 2019, 05:33 AM PDT
BO'H: there are divergent views on probability out there; it seems the author is fishing for a sort of degree-of-epistemic-warrant issue. I see: "Here, probabilities are understood as relations of evidential support between propositions. ‘The probability of X given Y’ is the degree to which Y supports the truth of X." In the ABCD example, he keeps pointing to subjects and their views, e.g. D reports his perception as 60%. B holds that, by definition, a coin or two-sided die has 50-50 H-T odds, though a run of four tosses can fluctuate widely. C seems to be looking at the particular run of the self-destructing coin, and reasons that if 3 of 4 flips were H, then the odds that the first was H are 75%. I see onward:
The degree-of-support interpretation incorporates what’s right about each of our first three approaches while correcting their problems. It captures the connection between probabilities and degrees of confidence. It does this not by identifying them – instead, it takes degrees of belief to be rationally constrained by degrees of support. The reason I should be 50 per cent confident that a coin lands heads, if all I know about it is that it is symmetrical, is because this is the degree to which my evidence supports this hypothesis. Similarly, the degree-of-support interpretation allows the information that the coin landed heads with a 75 per cent frequency to make it 75 per cent probable that it landed heads on any particular toss. It captures the connection between frequencies and probabilities but, unlike the frequency interpretation, it denies that frequencies and probabilities are the same thing. Instead, probabilities sometimes relate claims about frequencies to claims about specific individuals.
KF

kairosfocus
March 1, 2019, 02:41 AM PDT
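For illustration, here is a minimal simulation of the 75 per cent claim in the passage above; it is only a sketch, assuming a fair coin flipped four times and conditioning on exactly three heads, not code from the article.

import random

# Sketch: estimate P(first flip = heads | exactly 3 of the 4 flips were heads)
# for a fair coin, by simulating runs of four flips and conditioning on the
# observed 75 per cent frequency of heads.
def estimate(trials=200_000):
    matching = 0       # runs with exactly 3 heads out of 4
    first_heads = 0    # of those, runs where the first flip was heads
    for _ in range(trials):
        flips = [random.random() < 0.5 for _ in range(4)]
        if sum(flips) == 3:
            matching += 1
            first_heads += flips[0]
    return first_heads / matching

print(f"P(first toss heads | 3 of 4 heads) is about {estimate():.3f}")   # roughly 0.75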
https://darwinfalsified.wordpress.com/ The linked article examines the fundamental assumption behind the theory of evolution: that, because each new organism has a subtly different genome from its parents, this will eventually lead to genomes with information for previously nonexistent, functional, niche-occupying structures, such as organs. For that purpose, it first compares two libraries. One library contains 'realized genomes', i.e. all genomes that could have been formed during the entire history of life on Earth. The other contains 'unrealized genomes', i.e. all possible genomes that a genome of a certain size allows, reduced by the number of realized genomes. Finally, it calculates the waiting time required to find a genome with information for a single, super-primitive bio-structure, and concludes that this would take 10^871 years.

forexhr
March 1, 2019, 01:40 AM PDT
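For a sense of the kind of waiting-time arithmetic being described, here is a back-of-the-envelope sketch; every number in it (sequence length, functional targets, trials per year) is a placeholder assumption for illustration, not a figure from the linked article.

import math

# Back-of-the-envelope sketch with placeholder numbers (not the article's model).
# Work in log10 because the quantities overflow ordinary floating point.
L = 1000                     # assumed sequence length in nucleotides (placeholder)
log10_functional = 30        # assumed log10(number of functional target sequences) (placeholder)
log10_trials_per_year = 40   # assumed log10(genome "trials" per year, biosphere-wide) (placeholder)

log10_space = L * math.log10(4)   # log10 of 4^L, about 602 for L = 1000
log10_years = log10_space - log10_functional - log10_trials_per_year

print(f"sequence space is about 10^{log10_space:.0f} sequences")
print(f"expected waiting time is about 10^{log10_years:.0f} years")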
I'm struggling to understand what this new approach is: it doesn't even seem coherent. I hope this is because the author has explained things really badly: the example with Adam blindfolding Beth, Charles and Dave is a mess, because the author doesn't seem to appreciate that under every interpretation of probability, the probability changes when the information changes. Weirder is this:
The degree-of-support interpretation incorporates what’s right about each of our first three approaches while correcting their problems. It captures the connection between probabilities and degrees of confidence. It does this not by identifying them – instead, it takes degrees of belief to be rationally constrained by degrees of support. The reason I should be 50 per cent confident that a coin lands heads, if all I know about it is that it is symmetrical, is because this is the degree to which my evidence supports this hypothesis.
Which is a straight-up Bayesian interpretation of probability. He also writes this:
Because they turn probabilities into different kinds of entities, our four theories offer divergent advice on how to figure out the values of probabilities. The first three interpretations (frequency, propensity and confidence) try to make probabilities things we can observe – through counting, experimentation or introspection.
Which is flat-out wrong: we observe events, and from those infer (in one way or another) probabilities. So probabilities are statements that are used to make predictions about things we can observe, but we cannot observe the probabilities directly.

Bob O'H
March 1, 2019, 01:15 AM PDT
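A minimal simulation of that last point, with an assumed true bias: what we actually observe are frequencies that estimate the underlying probability, never the probability itself.

import random

# Sketch: the "true" probability is a model parameter; we only ever see events
# and the frequencies computed from them.
TRUE_P = 0.5   # assumed underlying bias of the coin (not directly observable)

random.seed(1)
for n in (10, 100, 1_000, 10_000, 100_000):
    heads = sum(random.random() < TRUE_P for _ in range(n))
    print(f"n = {n:>6}: observed frequency = {heads / n:.3f}")
# The frequencies tighten around 0.5 as n grows, but no finite run of
# observations simply is the probability.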
Where does the randomness in nature come from? Sure, at the end of the day it comes from quantum-level processes. But why are those processes random, and are they truly random? How come nature gives us just one outcome out of N possibilities for a wave function, while at the same time nature computes all those N outcomes before giving us one? It is almost like there is a super-computer "out there" making the calculations, and then "materializing" just one outcome out of N for us to enjoy...

Eugene
February 28, 2019, 09:31 PM PDT
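As a toy picture of "one outcome out of N", here is a sketch in which assumed, made-up amplitudes for N outcomes are turned into probabilities via the Born rule and a single outcome is sampled; it is purely illustrative.

import random

# Toy sketch: N outcomes with made-up complex amplitudes. The Born rule assigns
# each outcome probability |amplitude|^2; measurement yields exactly one of them.
amplitudes = [0.6 + 0.0j, 0.0 + 0.5j, 0.5 + 0.2j, 0.3 - 0.1j]   # illustrative values
probs = [abs(a) ** 2 for a in amplitudes]
total = sum(probs)
probs = [p / total for p in probs]   # normalize so the probabilities sum to 1

outcome = random.choices(range(len(probs)), weights=probs, k=1)[0]
print("probabilities:", [round(p, 3) for p in probs])
print("observed outcome:", outcome)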
An interesting critique of Climenhaga's interpretation of probability is provided by William Briggs (Statistician to the Stars!) on his website: http://wmbriggs.com/post/26554/

MikeW
February 28, 2019, 04:14 PM PDT
"entails ‘this coin lands heads or tails’: by thinking." Well, no. If we have only a small sample size, we can't tell the difference between the observation "the packages are square' and "ALL packages are ALWAYS square". Some of the discussion of signal vs noise is like that: does the system produce spurious signals along with the information-containing signals? If we don't know how often spurious signals are produced by the system itself, then most attempts at filtering will in fact filter out some of the Information. Or worse, we will score some of the Noise as Information and ACT on the value assigned to the Noise. I would also note that the VERY experienced man who taught my 6 Sigma class told us that whilst working on automated labeling of radio (speedometer? faceplates for automobiles in Detroit (I forget which specific automaker), the Sampling and Digital Process Control guys finally concluded that the process was inherently prone to 5% (or something) misprints, and NO AMOUNT of fidgeting with the hardware was gonna change that. So the answer was 100% MANUAL inspection and discard anything the human didn't like. He told us this war story so that when we went back to the real world we would NOT go into a Process Improvement effort with the FIXED idea that Statistical Control was gonna be the answer. And of course there is always the problem of False Positives, where the evaluation INCORRECTLY scores one of the coin flips in a system where the coin actually produces 51 heads for every 49 tails in its normal mode of operation.vmahuna
February 28, 2019
February
02
Feb
28
28
2019
12:22 PM
12
12
22
PM
PDT
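A quick simulation of that last scenario, with the 51/49 bias assumed: in small runs a 51 per cent coin is essentially indistinguishable from a fair one, and the bias only becomes reliably visible in very long runs.

import random

# Sketch: how often does a coin with an assumed 51% heads bias actually show
# more heads than tails in a run of n flips? For small n it looks like a fair
# coin; the bias only becomes reliably visible once n reaches the tens of thousands.
TRUE_P = 0.51        # assumed bias (placeholder)
EXPERIMENTS = 2_000  # simulated runs per sample size

random.seed(0)
for n in (100, 1_000, 10_000):
    majority_heads = sum(
        sum(random.random() < TRUE_P for _ in range(n)) > n / 2
        for _ in range(EXPERIMENTS)
    )
    print(f"n = {n:>6}: heads outnumber tails in {majority_heads / EXPERIMENTS:.0%} of runs")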
I guess I don't see how this article says anything new, but maybe I'm missing something. Probability is always with respect to some assumptions or known information.

hazel
February 28, 2019, 12:00 PM PDT
