Three approaches are analysed (frequencies, propensities, and degrees of belief), and a fourth, degree of support, is proposed:

… our four theories offer divergent advice on how to figure out the values of probabilities. The first three interpretations (frequency, propensity and confidence) try to make probabilities things we can observe – through counting, experimentation or introspection. By contrast, degrees of support seem to be what philosophers call ‘abstract entities’ – neither in the world nor in our minds. While we know that a coin is symmetrical by observation, we know that the proposition ‘this coin is symmetrical’ supports the propositions ‘this coin lands heads’ and ‘this coin lands tails’ to equal degrees in the same way we know that ‘this coin lands heads’ entails ‘this coin lands heads or tails’: by thinking.

Nevin Climenhaga, “The concept of probability is not as simple as you think” at Aeon
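The coin example can be made concrete. Here is a minimal sketch (the function name and parameters are my own, for illustration) contrasting the frequentist route, which estimates P(heads) by flipping and counting, with the support-based answer, which is fixed by symmetry before any flip is observed:

```python
import random

def frequency_estimate(p_heads=0.5, n_flips=100_000, seed=42):
    """Frequentist route: estimate P(heads) by flipping and counting."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p_heads for _ in range(n_flips))
    return heads / n_flips

# The support-based (a priori) answer needs no flips at all:
# the symmetry of the coin alone assigns P(heads) = P(tails) = 0.5.
estimate = frequency_estimate()
```

With a fair simulated coin, the counted frequency converges toward the value that symmetry delivers without any observation at all.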

Hmmm. “Thinking” is a risky strategy suggestion in a world where consciousness is supposed to be an illusion or else a material thing, unless, of course, your coffee mug has it too.

No legerdemain around probability is going to make most naturalism (nature is all there is, often called “materialism”) claims true. But not for lack of trying.

*See also:* Probability of a single protein forming by chance

and

Confusing Probability: The “Every-Sequence-Is-Equally-Improbable” Argument


I guess I don’t see how this article says anything new, but maybe I’m missing something. Probability is always relative to some assumptions or known information.

“entails ‘this coin lands heads or tails’: by thinking.”

Well, no. If we have only a small sample size, we can’t tell the difference between the observation “the packages are square” and the generalization “ALL packages are ALWAYS square”.

Some of the discussion of signal vs noise is like that: does the system produce spurious signals along with the information-containing signals? If we don’t know how often spurious signals are produced by the system itself, then most attempts at filtering will in fact filter out some of the Information. Or worse, we will score some of the Noise as Information and ACT on the value assigned to the Noise.
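To make that trade-off concrete, here is a hedged sketch (the rates below are invented for illustration) of how a filter’s false-positive rate turns Noise into accepted “Information” while its false-negative rate discards real signal:

```python
def filter_outcomes(p_signal, p_pass_given_signal, p_pass_given_noise):
    """Given a filter's pass rates, compute (a) the fraction of passed
    events that are actually noise and (b) the fraction of real signal
    that the filter discards."""
    p_noise = 1.0 - p_signal
    p_pass = p_signal * p_pass_given_signal + p_noise * p_pass_given_noise
    noise_among_passed = p_noise * p_pass_given_noise / p_pass
    signal_discarded = 1.0 - p_pass_given_signal
    return noise_among_passed, signal_discarded

# Illustrative rates: 10% of events are real signal; the filter passes
# 90% of signal but also lets through 5% of the noise.
noise_frac, lost = filter_outcomes(0.10, 0.90, 0.05)
```

With these numbers, a full third of what passes the filter is noise that will be acted on as if it were information, even though 10% of the real signal was thrown away.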

I would also note that the VERY experienced man who taught my Six Sigma class told us that whilst working on automated labeling of radio (speedometer?) faceplates for automobiles in Detroit (I forget which specific automaker), the Sampling and Digital Process Control guys finally concluded that the process was inherently prone to 5% (or something) misprints, and NO AMOUNT of fiddling with the hardware was gonna change that. So the answer was 100% MANUAL inspection: discard anything the human didn’t like.

He told us this war story so that when we went back to the real world we would NOT go into a Process Improvement effort with the FIXED idea that Statistical Control was gonna be the answer. And of course there is always the problem of False Positives, where the evaluation INCORRECTLY scores one of the coin flips in a system where the coin actually produces 51 heads for every 49 tails in its normal mode of operation.
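The 51/49 coin illustrates why statistical control alone can mislead: detecting that small a bias takes a surprisingly large sample. A rough back-of-envelope sketch (a standard two-sided z-test approximation; the parameters are illustrative):

```python
import math

def flips_needed(p=0.51, p0=0.50, z=1.96):
    """Rough sample size for a two-sided z-test to distinguish a coin
    with true heads rate p from a fair coin p0 at ~95% confidence."""
    delta = abs(p - p0)
    se_unit = math.sqrt(p0 * (1.0 - p0))  # per-flip std deviation under fairness
    return math.ceil((z * se_unit / delta) ** 2)
```

For a 51/49 coin this comes out near ten thousand flips, which is why small biases slip past casual inspection and why evaluations of individual flips produce so many false positives.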

An interesting critique of Climenhaga’s interpretation of probability is provided by William Briggs (Statistician to the Stars!) on his website: http://wmbriggs.com/post/26554/

Where does the randomness in nature come from? Sure, at the end of the day it comes from quantum-level processes. But why are those processes random, and are they truly random? How come nature gives us just one outcome out of N possibilities for a wave function, while at the same time nature computes all those N outcomes before giving us one? It is almost like there is a super-computer “out there” making the calculations, and then “materializing” just one outcome out of N for us to enjoy…

I’m struggling to understand what this new approach is: it doesn’t even seem coherent. I hope this is because the author has explained things really badly: the example with Adam blindfolding Beth, Charles and Dave is a mess, because the author doesn’t seem to appreciate that under every interpretation of probability, the probability changes when the information changes.

Weirder is this:

Which is a straight-up Bayesian interpretation of probability.

He also writes this:

Which is flat-out wrong: we observe events, and from those infer (in one way or another) probabilities. So probabilities are statements that are used to make predictions about things we can observe, but we cannot observe the probabilities directly.
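That inference direction can be sketched briefly: we observe counts and infer a probability, with uncertainty that shrinks as observations accumulate. A minimal normal-approximation sketch (the counts are invented for illustration):

```python
import math

def estimate_with_ci(heads, flips, z=1.96):
    """Infer P(heads) from observed events: point estimate plus a
    normal-approximation ~95% confidence interval."""
    p_hat = heads / flips
    se = math.sqrt(p_hat * (1.0 - p_hat) / flips)
    return p_hat, (p_hat - z * se, p_hat + z * se)

# 53 heads in 100 flips: the estimate is 0.53, but the interval still
# comfortably contains 0.5, so we cannot yet rule out a fair coin.
p_hat, (low, high) = estimate_with_ci(53, 100)
```

The probability itself is never observed; only the flips are, and the inferred value always carries an error bar.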

https://darwinfalsified.wordpress.com/

The linked article examines the fundamental assumption behind the theory of evolution: that because each new organism has a subtly different genome than its parents, this will eventually lead to genomes with information for previously nonexistent, functional, niche-occupying structures, such as organs. For that purpose, it first compares two libraries. One library contains ‘realized genomes’ — i.e. all genomes that could have been formed during the entire history of life on Earth. The other library contains ‘unrealized genomes’ — i.e. all possible genomes that a genome of a certain size allows, reduced by the number of realized genomes. Finally, it calculates the waiting time required for finding the genome with information for a single, super-primitive bio-structure, and it concludes that this would take 10^871 years.
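For scale, the size of the ‘library’ of possible genomes grows as 4^N for a genome of N bases, which is easy to verify directly (the 500-base figure below is my own illustrative choice, not the article’s):

```python
def genome_space_size(length_bp):
    """Number of distinct DNA sequences of a given length (4 bases per site)."""
    return 4 ** length_bp

# Even a tiny 500-base sequence admits an astronomical number of variants:
# counting the decimal digits shows the space is already ~10^301 in size.
digits = len(str(genome_space_size(500)))
```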

BO’H:

There are divergent views on probability out there; it seems the author is pointing to a sort of degree-of-epistemic-warrant issue. I see:

“Here, probabilities are understood as relations of evidential support between propositions. ‘The probability of X given Y’ is the degree to which Y supports the truth of X.”

In the ABCD example, he keeps pointing to subjects and their views, e.g. D reports his perception as 60%. B holds that by definition a coin or two-sided die has 50-50 H-T odds, but a run of four tosses can fluctuate widely. C seems to be looking at the particular run of the self-destructing coin, and sees that if 3 of 4 flips were H then the odds of the first being H are 75%.
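C’s 75% figure checks out by direct enumeration: among the equally likely four-flip sequences with exactly three heads, three of the four start with heads. A minimal sketch:

```python
from itertools import product

# All 16 equally likely sequences of four fair-coin flips, conditioned
# on the report "exactly 3 of the 4 flips were heads".
sequences = list(product("HT", repeat=4))
three_heads = [s for s in sequences if s.count("H") == 3]
p_first_heads = sum(s[0] == "H" for s in three_heads) / len(three_heads)
```

This is ordinary conditional probability at work: the answer shifts from 50% to 75% as soon as the information about the run is added.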

I see onward:

KF

I agree with Bob at 5. I don’t think there is much, if anything, to this article.

I read the article linked at 3. I agree with the statement,

It’s always “the probability of x” given certain knowledge or assumptions. The source of the knowledge can vary, and in fact the level of certainty about the knowledge could vary, but our statements about probability always depend on some propositions about the background.

This seems non-controversial, and I don’t see what “new approach” is being offered here.

H, it is a semi-popular article introducing the epistemological view of probability. KF