Uncommon Descent Serving The Intelligent Design Community

Tabby’s Star, 3: the business of dealing with Black Swans

In the Tabby’s Star “extraordinary claims” follow-up thread, one of the usual objector personas tried to pounce on the corrective:

To do so, he tried to counter-pose the concept of Bayesian analysis, then professed to find that a discussion of the difference between risk and radical uncertainty is little more than meaningless verbiage. This, however, is a play to keep doing business as usual in science in the teeth of warning signs:

Where, we must also reckon with the subtleties of signals and noise:

I have responded onward and think it worthwhile to headline:

KF, 53 : >>Let me clip Barsch as a public service for those dipping a tentative toe in the frigid, shark-infested waters of a chaotic black swan-prone environment:

Statisticians lament how few business managers think probabilistically. In a world awash with data, statisticians claim there are few reasons to not have a decent amount of objective data for decision making. However, there are some events for which there are no data (they haven’t occurred yet), and there are other events that could happen outside the scope of what we think is possible.

The best quote to sum up this framework for decision making comes from the former US Defense secretary Donald Rumsfeld in February 2002:

“There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – there are things we do not know we don’t know.”

Breaking this statement down, it appears Mr. Rumsfeld is speaking about Frequentism, subjective probability (Bayes) and those rare but extreme events coined by Nassim Taleb as “Black Swans”.

. . . . Rumsfeld seems to be saying we can guess the probability of the “known knowns” because they’ve happened before and we have frequency data to support objective reasoning. These “known knowns” are Nassim Taleb’s White Swans. There are also “known unknowns” or things that have never happened before, but have entered our imaginations as possible events (Taleb’s Grey Swans). We still need probability to discern “the odds” of that event (e.g. dirty nuclear bomb in Los Angeles), so Bayes is helpful because we can infer subjective probabilities or “the possible value of unknowns” from similar situations tangential to our own predicament.

Lastly, there are “unknown unknowns”, or things we haven’t even dreamed about (Taleb’s Black Swan). Dr. Nassim Nicholas Taleb labels this “the fourth quadrant” where probability theory has no answers. What’s an illustration of an “unknown unknown”? Dr. Taleb gives us an example of the invention of the wheel, because no one had even thought or dreamed of a wheel until it was actually invented. The “unknown unknown” is unpredictable, because—like the wheel—had it been conceived by someone, it would have been already invented.

Rumsfeld’s quote gives business managers a framework for thinking probabilistically. There are “known knowns” for which Frequentism works best, “known unknowns” for which Bayesian Inference is the best fit, and there is a realm of “unknown unknowns” where statistics falls short, where there can be no predictions. This area outside the boundary of statistics is the most dangerous area, says Dr. Taleb, because extreme events in this sector usually carry large impacts . . .
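To make the first two regimes concrete, here is a minimal Python sketch (my own toy numbers, purely illustrative, not Barsch’s): a relative-frequency estimate serves the white swan, a Bayes prior-to-posterior update handles the grey swan, and the black swan, by definition, sits outside anything the code can enumerate.

```python
# Toy illustration of the three regimes (all numbers are made up).

# White swan ("known known"): plenty of historical data, so a simple
# frequentist relative-frequency estimate is reasonable.
late_shipments = 37
total_shipments = 500
p_late = late_shipments / total_shipments  # ~0.074

# Grey swan ("known unknown"): never observed directly, but imaginable,
# so we assign a subjective prior and update it on indirect evidence (Bayes).
prior = 0.01                # subjective prior that the rare event is coming
p_sign_if_event = 0.60      # chance of seeing a warning sign if it is coming
p_sign_if_no_event = 0.05   # chance of a false alarm

posterior = (p_sign_if_event * prior) / (
    p_sign_if_event * prior + p_sign_if_no_event * (1 - prior)
)

print(f"White swan (frequentist): P(late) ~ {p_late:.3f}")
print(f"Grey swan (Bayesian): prior {prior:.3f} -> posterior {posterior:.3f}")

# Black swan ("unknown unknown"): it is not in the hypothesis space at all,
# so no estimate can be coded for it; the coded response is robustness
# (margins, diversification, stop-losses) rather than a probability.
```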

I add, that’s where you need creative strategic thinkers who can suss out subtle signs and bring them to the table.

Onward, I would lay out a game tableau and/or decision tree for a multi-player, multi-turn game against known players AND “nature” involving not only known patterns but potential high impact outliers brought to the table by the futurists.

Then, we can proceed to a scenario-based analysis of the outcome patterns of business as usual vs credible alternatives, including black swans as well as internal-to-dynamics catastrophes.

All of this is cast in the context of strategic decision-making. How we apply this to science is to imagine ourselves sitting on the board of a big-ticket journal, and deciding on what to publish; where, your journal also has a research grant budget so it can help shape the path of research. (High status institutions in Science do indirectly influence research funding.)

Then, factor in the prudential principle of least regret on the downside and biggest windfall on the upside.
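As a minimal illustration of the least-regret idea (hypothetical payoff numbers, purely for the sake of the sketch): a strategy’s regret in a given scenario is its shortfall from the best payoff achievable in that scenario, and the prudent pick minimises the worst-case regret.

```python
# Minimax-regret sketch with hypothetical payoff numbers (illustration only).
# Rows are funding strategies; columns are scenarios, including a black swan.
payoffs = {
    "fund only mainstream":    {"business as usual": 10, "credible alternative": 2, "black swan": -8},
    "hedge with outliers":     {"business as usual": 7,  "credible alternative": 6, "black swan": 3},
    "bet heavily on outliers": {"business as usual": 1,  "credible alternative": 9, "black swan": 5},
}
scenarios = ["business as usual", "credible alternative", "black swan"]

# Best achievable payoff in each scenario, across all strategies.
best_in_scenario = {s: max(row[s] for row in payoffs.values()) for s in scenarios}

# Regret = shortfall from that best payoff; keep each strategy's worst case.
worst_regret = {
    strategy: max(best_in_scenario[s] - row[s] for s in scenarios)
    for strategy, row in payoffs.items()
}

least_regret = min(worst_regret, key=worst_regret.get)
print(worst_regret)                          # worst-case regret per strategy
print("Least-regret strategy:", least_regret)
```

On these toy numbers the hedged portfolio wins, which is the point of the exercise: the catastrophic scenario dominates the regret calculation even though, turn by turn, it looks improbable.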

Now, recommend your bets on an imaginary pool of grant funds for research and your recommendations on what to publish, knowing that this is likely to shift the balance of grant funding for future turns.

What articles and proposals would you entertain, and why? What SHOULD you be doing to hedge, to minimise the likelihood of big regrets on both the downside and the upside?

Now, imagine you are the interested public, voting for the pols who provide the Journal’s grant pool and who set priorities in part influenced by the journal’s publications. Which pols will you most likely vote for, and why? Which SHOULD you vote for, and why?

See the emergence of a problematique of thorny interacting, mutually reinforcing problems that can easily, blindly reinforce policies and ideologies that amplify exposure to destructive black swans?

Now, look back at the OP and particularly the appended charts.

Tell us if a lightbulb goes off.>>

Applicability to ID and many other areas of concern should be obvious. END

Comments
CR, to see why I picked 2 + 3 = 5 as an example of a SET, look at your hand. Assuming you have a normal one. As in, literally staring you in the face. KF PS: Consider, too, that there is a real world out there and even here not every thread will be checked every day. PPS: In case it has not registered, credence is subjective in emphasis, warrant is objective: there is good reason to accept as credibly true.

kairosfocus
January 26, 2018, 4:01 PM

Not to mention that you still haven’t explained how your selection of 2+3=5 as a shining example of a self-evident truth, among all others that you considered, was not a concrete example of exposing those ideas to criticism. Is there some reason you haven’t even acknowledged this at all?
This is a simple question. If this choice wasn't arbitrary, then what happened here, if not criticism? It seems I've found something quite rare. Did I get a response of several paragraphs? How about one paragraph? Several sentences? Even one sentence? No, no, no and no. I asked a question and KF has no response. At. All. What gives?

critical rationalist
January 21, 2018, 4:27 PM

@KF
CR, you are increasingly off-topic, and going over ground where you have been cogently answered any number of times.
If you're mistaken about a subject, couldn't that lead you to be mistaken as to what is relevant with respect to that same subject? Nor is it clear what you mean by "cogently answered". Perhaps we're talking past each other because you consider argument by definition a cogent answer?

critical rationalist
January 21, 2018, 4:15 PM

CR @ 10: Yeah, it looks like I goofed. I thought I was defending Bayes and it turns out I was defending Bayesianism. I concede.

LocalMinimum
January 21, 2018, 4:31 AM

KF @ 8: True. I was pointing out the sense that the replaced theories still make in light of their replacements, largely for the sake of disputing CR's attempts to reduce everything to nonsense.

LocalMinimum
January 21, 2018, 4:21 AM

CR, you are increasingly off-topic, and going over ground where you have been cogently answered any number of times. The post is about a highly significant matter that is not well understood, and that needs to be addressed. Just in the history of this country where I write, people needlessly died twenty years ago in significant part because a black swan was mishandled. KF PS: I will stick with a phrasing that brings out the balance that is found in the soft-form knowledge of science, management, the courtroom and day-to-day life: warranted, credibly true (and reliable) belief. What, for good reason, you would rely on when serious things are at stake. The balance of good reason and confidence in reliability is necessary and important.

kairosfocus
January 20, 2018, 2:41 PM

@LM
It was a joke, with the intentional absurdity being the inconsideration of the fact that all members of the complement of the theory must share a finite value amongst themselves, and thus the comically fallacious argument, that you’re trying to use as an actual argument.
The relevance of criticism is independent of its source or the source's intention in presenting it. It's the content, not the intent or source. As for the absurdity, are you suggesting that the content did not present an accurate model of Bayesianism and is therefore not a valid criticism? Did the intention of the author somehow make that less accurate? For example, you wrote…
If we’re dealing with a probability of a theory, T, we’re dealing with the set of all scenarios where T is true, PoT. ~PoT is simply the set of all scenarios where T is not true, i.e. that which T would explain is covered by other theories.
What other theories? Would that set not include the hundreds of trillions of other scenarios the Bayesian overloader in #2 generates, every second, where T is not true? Wouldn't that impact the probability of the set of scenarios where theory T is true?
When and where were quantum theory and relativity proven false?
Where and when did we come up with a working theory of quantum gravity?
Also, our theories are correct, barring laboratory/calculation errors, with respect to the data they properly correlate with. Their scope may be more limited than we realize, i.e. they may be special cases that reflect our frame of observation (Newtonian mechanics vs GR), but future theories shouldn’t be negations so much as generalizations, i.e. our current theories’ probability spaces are supersets of the probability spaces of future theories, provided we are actually applying proper logic and math and not just making up “good explanations”.
So, science is just about making properly correlated predictions about what we will experience? From this interview on constructor theory...
One of the central philosophical motivations for why I do fundamental physics is that I'm interested in what the world is like; that is, not just the world of our observations, what we see, but the invisible world, the invisible processes and objects that bring about the visible. Because the visible is only the tiny, superficial and parochial sheen on top of the real reality, and the amazing thing about the world and our place in it is that we can discover the real reality. We can discover what is at the center of stars even though we've never been there. We can find out that those cold, tiny objects in the sky that we call stars are actually million-kilometer, white, hot, gaseous spheres. They don't look like that. They look like cold dots, but we know different. We know that the invisible reality is there giving rise to our visible perceptions. That science has to be about that has been for many decades a minority and unpopular view among philosophers and, to a great extent, regrettably even among scientists. They have taken the view that science, just because it is characterized by experimental tests, has to be only about experimental tests, but that's a trap. If that were so, it would mean that science is only about humans and not even everything about humans but about human experience only. It's solipsism. It's purporting to have a rigorous objective world view that only observations count, but ending up by its own inexorable logic as saying that only human experience is real, which is solipsism.
If all you have is the failure of a prediction, you have a negation in the form of ~T. That does not result in a new explanatory theory.

critical rationalist
January 20, 2018, 1:46 PM

@KF
CR, scientific facts of observation may well be warranted to moral certainty.
And warranted is distinct from "credence" in what sense?
Self-evident first principles are utterly certain.
Again, nothing you have said actually conflicts with my position. Rather, you seem to have some objection to it which you haven't actually expanded on. Specifically, you have yet to present a self-evident principle that we have a good criticism of. Nor have you explained how we have access to an infallible list of criticisms that would be guaranteed to uncover errors in those principles at the time they are deemed self-evident, and therefore immune to criticism, or an infallible means of knowing when to defer to them or how to interpret them. Not to mention that you still haven't explained how your selection of 2+3=5 as a shining example of a self-evident truth, among all others that you considered, was not a concrete example of exposing those ideas to criticism. Is there some reason you haven't even acknowledged this at all? Pointing to dictionary definitions is not an argument. As for the rest, are you saying you disagree with this…
the objective of science should be to increase our ‘credence’ for true theories and that ‘credences’ held by a rational thinker actually do obey the probability calculus, in practice.
But agree with this?
...what science seeks to maximize is explanatory power, not probability or ‘credence’, because rational thinkers do not actually behave that way, in practice.
critical rationalist
January 20, 2018, 11:40 AM

LM, unfortunately, in "generalising," there is often a transformation of senses of terms. So, we end up with theory replacements. KF

kairosfocus
January 20, 2018, 9:55 AM

CR @ 2: It was a joke, with the intentional absurdity being the inconsideration of the fact that all members of the complement of the theory must share a finite value amongst themselves, and thus the comically fallacious argument, that you're trying to use as an actual argument. If we're dealing with a probability of a theory, T, we're dealing with the set of all scenarios where T is true, PoT. ~PoT is simply the set of all scenarios where T is not true, i.e. that which T would explain is covered by other theories. When and where were quantum theory and relativity proven false? Also, our theories are correct, barring laboratory/calculation errors, with respect to the data they properly correlate with. Their scope may be more limited than we realize, i.e. they may be special cases that reflect our frame of observation (Newtonian mechanics vs GR), but future theories shouldn't be negations so much as generalizations, i.e. our current theories' probability spaces are supersets of the probability spaces of future theories, provided we are actually applying proper logic and math and not just making up "good explanations".

LocalMinimum
January 20, 2018, 7:37 AM

CR, scientific facts of observation may well be warranted to moral certainty. Self-evident first principles are utterly certain. As you will recall, I have pointed out that scientific theories are explanatory frameworks that, on track record and per the logic of inference to best current explanation, are only warranted to be reliable within a range of validity, though they may, just may, be true in crucial part. That "may be" is the difference from models, which are known to be useful fictions. No probability of truthfulness can be objectively assigned to a theory in the sense that is commonly used, as in Quantum theory etc., as opposed to a synonym for a hypothesis. Bayesian approaches are about hypotheses and subjective assignment of probabilities which may be revised on evidence. Frequentist ones are about patterns of observation per sampling of populations. Radical uncertainty obtains when we have only noisy signals at best, or may even face out-of-the-blue-sky unknown unknowns. Just, we know on track record that we have enough ignorance that we can be caught out like that. KF

kairosfocus
January 19, 2018, 10:59 PM

Also, see this video, which argues that we can completely dispense with probability in physics.

critical rationalist
January 19, 2018, 6:53 PM

Another example? During the 2012 OPERA experiment in Switzerland, neutrinos were detected in a way that indicated they were traveling faster than the speed of light. Did this immediately refute Einstein's theory that nothing travels faster than c? No, it did not. This is because we did not have a theory that explained why neutrinos were traveling faster than the speed of light in the OPERA experiment, but not in others. IOW, the negation of a theory does not produce a new explanatory theory. Before Einstein's theory was overthrown, a new theory would be needed to explain the same phenomena at least as well, in addition to the additional phenomena of the unique OPERA observations, and we didn't have one. Eventually, it was discovered that what was false was the theory that the experiment had been set up in such a way that its observations would be accurate, rather than the theory that nothing travels faster than the speed of light in real space. ID, as a negation of neo-Darwinism, doesn't explain the same phenomena remotely as well, let alone any specific discrepancies we observe. "That's just what some designer must have wanted" is not such an explanation. Nor is some abstract authoritative source that has no defined limitations. To quote Popper, "Every 'good' scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is."

critical rationalist
January 19, 2018, 6:48 PM

The problem is, this simply doesn't add up. Specifically, you seem to be suggesting that the objective of science should be to increase our 'credence' for true theories and that 'credences' held by a rational thinker actually do obey the probability calculus, in practice. But when we try to take this seriously, for the purpose of criticism, it fails. Example? Take some explanatory theory T, such as the sun is powered by nuclear fusion. The negation of T (~T), the sun is not powered by nuclear fusion, is not an explanation in the least. It's merely the negation of T, which doesn't result in a new explanatory theory.

Now, with that in mind, let's suppose it actually is possible, for the sake of argument, to quantify this property that science is supposed to maximize. Let's call that 'q'. If explanatory theory T had some amount of q, then ~T has no q at all, as opposed to the 1-q that the probability calculus would require if q actually represents a probability. It's a category error, of sorts, because ~T, not representing an explanatory theory, has none of what gave T a probability.

Furthermore, take the conjunction of two mutually inconsistent explanatory theories (T1 & T2), such as quantum theory and relativity. Both of them are provably false. So, their conjunction would have a probability of zero. Yet, the conjunction of those two theories is the best understanding we have of the world, which is exponentially far from nothing.

Finally, if we expect that all of our best explanatory theories of fundamental physics will eventually be superseded, what we would believe today would eventually become negations of the very future theories that supersede them. However, it's still those false explanatory theories, not true negations, that represent our deepest knowledge of physics. What will have happened to all the probability they supposedly have today? IOW, what science seeks to maximize is explanatory power, not probability or 'credence', because rational thinkers do not actually behave that way, in practice.

critical rationalist
January 19, 2018, 6:23 PM

@KF From another thread, why theories do not have probabilities.
https://smbc-comics.com/index.php?id=4127

[Speaker] According to Bayesianism, every theory, no matter how ridiculous, has some probability of being true. But the sum of all theories multiplied by their probability must still be one. Therefore, I've created a new device: the Bayesian overloader. Start with some very probable theory that nobody likes. For example, "I will die someday." Now, we set the overloader to generate opposing theories, like "everyone living will not die" or "only pumpkins die" or "nobody has ever died – they're all just sleeping." Because all of these theories get some slice of the probability pie, so long as we generate theories fast enough, the undesirable theory becomes less and less true. The overloader creates hundreds of trillions of theories every second. We wait about thirty seconds, then BAM! The initial theory is now vanishingly unlikely! And thus I am immortal! [Speaker stabs himself] [Dies] [Audience member] See, that's why I'm a frequentist.
critical rationalist
January 19, 2018, 4:56 PM

Tabby’s Star, 3: the business of dealing with Black Swans

kairosfocus
January 19, 2018, 1:37 AM
