In the Tabby’s Star “extraordinary claims” follow-up thread, one of the usual objector personas tried to pounce on the corrective:
To do so, he tried to counter-pose the concept of Bayesian analysis, then professed to find that a discussion of the difference between risk and radical uncertainty was little more than meaningless verbiage. This is, however, little more than a ploy to keep science running on business as usual in the teeth of warning signs:
Where, we must also reckon with the subtleties of signals and noise:
I have responded onward and think it worth the while to headline:
Statisticians lament how few business managers think probabilistically. In a world awash with data, statisticians claim there are few reasons not to have a decent amount of objective data for decision making. However, there are some events for which there are no data (they haven’t occurred yet), and there are other events that could happen outside the scope of what we think is possible.
The best quote to sum up this framework for decision making comes from the former US Defense secretary Donald Rumsfeld in February 2002:
“There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – there are things we do not know we don’t know.”
Breaking this statement down, it appears Mr. Rumsfeld is speaking about Frequentism, subjective probability (Bayes), and those rare but extreme events Nassim Taleb has dubbed “Black Swans”.
. . . . Rumsfeld seems to be saying we can guess the probability of the “known knowns” because they’ve happened before and we have frequency data to support objective reasoning. These “known knowns” are Nassim Taleb’s White Swans. There are also “known unknowns”, or things that have never happened before but have entered our imaginations as possible events (Taleb’s Grey Swans). We still need probability to discern “the odds” of such an event (e.g. a dirty nuclear bomb in Los Angeles), so Bayes is helpful because we can infer subjective probabilities or “the possible value of unknowns” from similar situations tangential to our own predicament.
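To make the “known unknowns” point concrete, here is a minimal sketch of a Bayesian update for a hypothetical grey-swan scenario. All of the numbers (the prior, the sensitivity, the false-alarm rate) are invented for illustration; the point is only to show how a subjective prior is revised when an early-warning indicator fires:

```python
# Hypothetical grey-swan illustration: updating a subjective probability
# via Bayes' rule. Every number below is an assumption for illustration.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H | E) from a prior and the two conditional likelihoods."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Subjective prior that a rare threat scenario is in play (assumed, not data-driven)
prior = 0.01
# An early-warning indicator fires; assumed detection characteristics:
p_indicator_given_threat = 0.90     # sensitivity (assumed)
p_indicator_given_no_threat = 0.05  # false-alarm rate (assumed)

posterior = bayes_update(prior, p_indicator_given_threat, p_indicator_given_no_threat)
print(f"posterior = {posterior:.3f}")  # about 0.154
```

Notice how even a noisy indicator moves a 1% prior to roughly 15%: the “subtleties of signals and noise” mentioned above are doing the work, and the answer is only as good as the assumed likelihoods.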
Lastly, there are “unknown unknowns”, or things we haven’t even dreamed about (Taleb’s Black Swans). Dr. Nassim Nicholas Taleb labels this “the fourth quadrant”, where probability theory has no answers. What’s an illustration of an “unknown unknown”? Dr. Taleb gives the example of the invention of the wheel: no one had even thought or dreamed of a wheel until it was actually invented. The “unknown unknown” is unpredictable because, like the wheel, had it been conceived by someone, it would already have been invented.
Rumsfeld’s quote gives business managers a framework for thinking probabilistically. There are “known knowns” for which Frequentism works best, “known unknowns” for which Bayesian inference is the best fit, and there is a realm of “unknown unknowns” where statistics falls short and no predictions can be made. This area outside the boundary of statistics is the most dangerous, says Dr. Taleb, because extreme events in this sector usually carry large impacts . . .
I add, that’s where you need creative strategic thinkers who can suss out subtle signs and bring them to the table.
Onward, I would lay out a game tableau and/or decision tree for a multi-player, multi-turn game against known players AND “nature”, involving not only known patterns but potential high-impact outliers brought to the table by the futurists.
Then, we can proceed to a scenario-based analysis of the outcome patterns of business as usual vs credible alternatives, including black swans as well as internal-to-dynamics catastrophes.
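The scenario comparison above can be sketched as a toy payoff table. The scenarios, payoffs, and probabilities here are all invented for illustration, and the black-swan probability is precisely the number Taleb warns we cannot actually know, so treat it as a sensitivity knob rather than an estimate:

```python
# Toy scenario table contrasting "business as usual" with a hedged strategy
# when a rare, high-impact outlier is on the board.
# All probabilities and payoffs are invented for illustration.

scenarios = {
    "normal year":      {"prob": 0.90, "bau": 100,   "hedged": 95},
    "known downturn":   {"prob": 0.09, "bau": -20,   "hedged": 20},
    "black-swan shock": {"prob": 0.01, "bau": -2000, "hedged": -100},
}

def expected_value(strategy):
    """Probability-weighted payoff of a strategy across all scenarios."""
    return sum(s["prob"] * s[strategy] for s in scenarios.values())

for strategy in ("bau", "hedged"):
    print(strategy, round(expected_value(strategy), 2))
```

Business as usual wins in the normal year, yet the hedged strategy dominates on expected value once the rare shock is on the board, and the gap widens rapidly as the (unknowable) shock probability grows. That is the whole case for bringing the futurists’ outliers to the table.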
All of this is cast in the context of strategic decision-making. To apply it to science, imagine ourselves sitting on the board of a big-ticket journal, deciding what to publish; where, your journal also has a research grant budget, so it can help shape the path of research. (High-status institutions in science do indirectly influence research funding.)
Then, factor in the prudential principle of least regret on the downside and biggest windfall on the upside.
Now, recommend your bets on an imaginary pool of grant funds for research and your recommendations on what to publish, knowing that this is likely to shift the balance of grant funding for future turns.
What articles and proposals would you entertain, and why? What SHOULD you be doing to hedge, to minimise the likelihood of big regrets on both the downside and the upside?
Now, imagine you are the interested public, voting for the pols who provide the Journal’s grant pool and who set priorities shaped in part by the journal’s publications. Which pols will you most likely vote for, and why? Which SHOULD you vote for, and why?
See the emergence of a problematique of thorny, interacting, mutually reinforcing problems that can easily and blindly entrench policies and ideologies that amplify exposure to destructive black swans?
Now, look back at the OP and particularly the appended charts.
Tell us if a lightbulb goes off.
Applicability to ID and many other areas of concern should be obvious. END