In the Induction thread, we have continued to explore inductive logic, science and ID vs Evolutionary Materialism.
Among the key points raised (with the help of Hilary Putnam) is that, while Popper sees himself as opposed to induction, it is arguable that he has in fact (against his intent) brought it back in, once we reckon with the need to trust theories in practical contexts and once we explore the implications of corroboration and of success “so far” under “severe testing.”
As comment 48 observed:
>> . . . Hilary Putnam [notes, in an article on the Corroboration of theories], regarding Popper’s corroboration and inductive reasoning:
. . . most readers of Popper read his account of corroboration as an account of something like the verification of theories, in spite of his protests. In this sense, Popper has, contre lui [ ~ against his intent] a theory of induction . . . .
Standard ‘inductivist’ accounts of the ‘confirmation’ of scientific theories go somewhat like this: Theory implies prediction (basic sentence, or observation sentence); if prediction is false, theory is falsified; if sufficiently many predictions are true, theory is confirmed. For all his attack on inductivism, Popper’s schema is not so different: Theory implies prediction (basic sentence); if prediction is false, theory is falsified; if sufficiently many predictions are true, and certain further conditions are fulfilled, theory is highly corroborated.
Moreover, this reading of Popper does have certain support. Popper does say that the ‘surviving theory’ is accepted—his account is, therefore, an account of the logic of accepting theories [–> tantamount to inductive support and confident trust in results deemed reliable enough to put to serious work] . . .
Yes, Popper points to the quasi-infinite set of possible theories and declares that the best is the most improbable, most severely testable one that has survived such testing thus far. But the point is, such theories are routinely seen as empirically reliable and are put to work, being trusted to be “good enough for government work.”>>
However, testing and falsification pose further difficulties, which are worth highlighting by headlining comment 50 in the thread:
>>The next “logical” question is how inductive reasoning (modern sense) applies to scientific theories and — HT Lakatos and Kuhn, Feyerabend and Putnam — research programmes.
First, we need to examine the structure of scientific predictions, where:
we have a theory T, plus auxiliary hypotheses (and “calibration”) about observations and the required instruments etc., AI, plus auxiliary statements framing and modelling initial, intervening and boundary conditions [in a world model], AM, which together yield predicted or explained observations, P/E:
T + AI + AM –> P/E
We compare observations, O (with AI again acting), to yield explanatory gap, G:
P/E – (O + AI) –> G = g
In an ideally successful or “progressive” theory or paradigm, G will be 0 [zero], but in almost all cases there will be anomalies; scientific theories generally live with an explanatory/predictive deficit, call it g for convenience. This gives meat to the bones of Lakatos’ pithy observation that theories are born, live and die refuted.
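{Let us make the schema concrete with a minimal toy sketch in Python. The pendulum set-up, the 0.5% stopwatch calibration and all the numbers below are illustrative assumptions, not anything drawn from the thread; they simply show T, AI, AM and the resulting gap g in action:}

```python
import math

# Toy instance of the schema T + AI + AM -> P/E, then P/E - (O + AI) -> G = g.
# The pendulum example, calibration factor and numbers are illustrative only.

def theory_T(length_m, g_local):
    """T: small-angle pendulum law, period = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g_local)

def calibrate_AI(raw_reading_s):
    """AI: correct a (hypothetical) stopwatch assumed to run 0.5% fast."""
    return raw_reading_s / 1.005

# AM: modelled initial/boundary conditions for this particular set-up.
AM = {"length_m": 0.994, "g_local": 9.81}

prediction  = theory_T(**AM)          # P/E, the predicted period in seconds
raw_obs     = 2.012                   # O, the raw instrument reading in seconds
observation = calibrate_AI(raw_obs)   # O as interpreted through AI

gap = prediction - observation        # G = g, the explanatory/predictive deficit
print(f"predicted {prediction:.4f} s, observed {observation:.4f} s, gap g = {gap:+.4f} s")
```

{Note that a non-zero g in such a toy case could be blamed on T, on AI or on AM, which is exactly the ambiguity discussed below.}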
However, when a new theory better explains persistent anomalies and has some significant success with otherwise unexplained phenomena, and when this continues for some time, its champions are able to advance it. {Let us insert an infographic:}
We then see dominant and perhaps minor schools of thought, with research programmes that coalesce around the key successes. Scope of explanation also counts: a theory T1 may have a wider scope of generally acknowledged success and yet carry a deficit g1 greater than g2, the deficit of a narrower-scope theory T2.
Meanwhile, investigatory methods are linked more by family resemblance than by any global, one-size-fits-all-and-only “Scientific Method.”
This picture immediately implies that Popper’s criterion of falsification is very hard to apply in practice. First, observations are themselves coloured by instrumental issues (including the eyeball, mark 1, etc.). Second, the key theoretical claims of a given theory Tk are usually not directly predictive/explanatory of observations; they are associated with a world-state model AMk that is generally far less tightly held than Tk. In Lakatos’ terms, we have an armour-belt that protects the core theory.
As a consequence, the “battlecruisers at Jutland” principle applies.
That is, unless there is a key vulnerability of design or of procedures that allows a lucky shot to detonate the magazines deep in the bowels of the research programme, it has to be battered and reduced to sinking condition, i.e. it has to become a “degenerating” research programme in competition with advancing “progressive” ones. This means that a competitor Tm must have the resources to initiate and sustain that battering while itself being better protected against the counter-barrage.
And when a paradigm and research programme is deeply embedded in cultural power circles and their agendas, it can often dominate technical discussion, lock out controversial alternatives and drive them to the margins. That means it is going to be hard for such alternatives to hold prestigious scholars and attract graduate students. But guerrilla, fringe schools can sometimes find sanctuaries and perhaps build up enough of a following that a time of crisis emerges.
And so, the succession of scientific theories, paradigms and research programmes is seldom smooth, and is plainly deeply intertwined with institutional and general politics, especially where grant-making is an issue.
This complex, messy picture fits well with the sorts of scientific quarrels that have been a notorious part of the history of modern science. It resonates with the story of Economics over the past century or a bit more; it fits with psychology; it speaks to the ongoing anthropogenic global warming controversy; and it speaks straight to the controversies surrounding design thought.
For, ID is a narrow-scope paradigm that addresses key persistent anomalies in the cluster of origins theories that fit under evolutionary materialist scientism. However, the dominant paradigm is institutionally and politically much stronger. So, ID is a marginal, often marginalised and even stereotyped and scapegoated, fairly narrow-scope school of thought (at least in terms of the guild of scholars). Yet it seems to be targeting key vulnerabilities of method and raises a potentially transforming insight: designing intelligence is real, often acts by directing configurations in ways that are complex, fine-tuned and information-rich, and so can be reliably detected when such traces or signs are present.
{Video:}
Thus, the inductive challenge posed by ID is that of inference to the best current explanation on empirically grounded, reliable signs, backed up by analysis of the search-resources challenge facing blind search, on the scale of the solar system or the observed cosmos, in very large configuration spaces.
This is a powerful point, and one unanswered; likely, one that cannot be answered on evolutionary materialistic scientism. But that does not prevent institutional power from holding off the threat for a long time.
However, eventually, there will be a tipping point.
Which may be nearer than we think.
Walker and Davies, in a recent article, hint at just how near this may be:
In physics, particularly in statistical mechanics, we base many of our calculations on the assumption of metric transitivity, which asserts that a system’s trajectory will eventually [–> given “enough time and search resources”] explore the entirety of its state space – thus everything that is physically possible will eventually happen. It should then be trivially true that one could choose an arbitrary “final state” (e.g., a living organism) and “explain” it by evolving the system backwards in time choosing an appropriate state at some ’start’ time t_0 (fine-tuning the initial state). In the case of a chaotic system the initial state must be specified to arbitrarily high precision. But this account amounts to no more than saying that the world is as it is because it was as it was, and our current narrative therefore scarcely constitutes an explanation in the true scientific sense.
We are left in a bit of a conundrum with respect to the problem of specifying the initial conditions necessary to explain our world. A key point is that if we require specialness in our initial state (such that we observe the current state of the world and not any other state) metric transitivity cannot hold true, as it blurs any dependency on initial conditions – that is, it makes little sense for us to single out any particular state as special by calling it the ’initial’ state. If we instead relax the assumption of metric transitivity (which seems more realistic for many real world physical systems – including life), then our phase space will consist of isolated pocket regions [–> islands . . . ] and it is not necessarily possible to get to any other physically possible state (see e.g. Fig. 1 for a cellular automata example).
[–> or, there may not be “enough” time and/or resources for the relevant exploration, i.e. we see the 500 – 1,000 bit complexity threshold at work vs 10^57 – 10^80 atoms with fast chemical reaction times of about 10^-13 to 10^-15 s, leading to inability to explore more than a vanishingly small fraction of the relevant configuration spaces on the gamut of the Sol system or the observed cosmos . . . the only actually, credibly observed cosmos] {Search challenge:}
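{To put rough numbers on the note just above, here is a minimal back-of-envelope sketch in Python. The atom counts, reaction timescale and bit thresholds are taken from that note; the ~10^17 s of available time and the “every atom searches at the fast chemical rate” simplification are further generous assumptions added here:}

```python
# Back-of-envelope "search resources" sketch, using the figures in the note above
# as working assumptions (plus ~10^17 s of time as a further generous assumption).

ATOMS_SOL    = 1e57   # ~atoms available in the solar system
ATOMS_COSMOS = 1e80   # ~atoms in the observed cosmos
RATE         = 1e14   # fast chemical-scale events per atom per second (~10^-14 s each)
TIME         = 1e17   # ~seconds of available time (assumption added for this sketch)

def max_events(atoms):
    """Generous upper bound: every atom acts as a searcher at the fast rate."""
    return atoms * RATE * TIME

for label, atoms, bits in [("solar system", ATOMS_SOL, 500),
                           ("observed cosmos", ATOMS_COSMOS, 1000)]:
    configs = 2.0 ** bits                    # configurations at the bit threshold
    fraction = max_events(atoms) / configs   # share of the space that could be sampled
    print(f"{label}: ~{max_events(atoms):.0e} events vs 2^{bits} ~ {configs:.1e} "
          f"configs -> fraction sampled < {fraction:.1e}")
```

{On those assumptions the sampled fraction comes out at roughly 3 x 10^-63 of the 500-bit space for the solar system and roughly 10^-190 of the 1,000-bit space for the observed cosmos, i.e. vanishingly small.}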
Thus the initial state must be tuned to be in the region of phase space in which we find ourselves [–> notice, fine tuning], and there are regions of the configuration space our physical universe would be excluded from accessing, even if those states may be equally consistent and permissible under the microscopic laws of physics (starting from a different initial state). Thus according to the standard picture, we require special initial conditions to explain the complexity of the world, but also have a sense that we should not be on a particularly special trajectory to get here (or anywhere else) as it would be a sign of fine-tuning of the initial conditions. [ –> notice, the “loading”] Stated most simply, a potential problem with the way we currently formulate physics is that you can’t necessarily get everywhere from anywhere (see Walker [31] for discussion). [“The “Hard Problem” of Life,” June 23, 2016, a discussion by Sara Imari Walker and Paul C.W. Davies at Arxiv.]
We live in interesting times.>>
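As a small aside on the “isolated pocket regions” point in the Walker and Davies excerpt, the following toy cellular-automaton sketch (my own illustration, not their Fig. 1; the rule, ring size and seed are arbitrary choices) shows how a deterministic trajectory started from one state visits only part of its finite state space, so many configurations are simply never reached from that starting point:

```python
# Toy elementary cellular automaton on a small ring: a deterministic trajectory
# from one initial state visits only part of the full state space.
# Rule, ring size and seed are arbitrary illustrative choices; the printout
# reports the exact fraction visited for this particular set-up.

N = 12        # cells on the ring -> 2**12 = 4096 possible configurations
RULE = 90     # Wolfram rule number used for this illustration

def step(state):
    """Advance the whole ring one time step under the chosen rule."""
    nxt = 0
    for i in range(N):
        left   = (state >> ((i + 1) % N)) & 1
        centre = (state >> i) & 1
        right  = (state >> ((i - 1) % N)) & 1
        idx = (left << 2) | (centre << 1) | right
        if (RULE >> idx) & 1:
            nxt |= 1 << i
    return nxt

# Iterate from a single live cell until the trajectory repeats a state.
state, visited = 1, set()
while state not in visited:
    visited.add(state)
    state = step(state)

total = 2 ** N
print(f"visited {len(visited)} of {total} states "
      f"({len(visited) / total:.1%} of the state space)")
```

Of course, a 12-cell ring is only a toy illustration of the reachability idea; the excerpt’s concern is with realistic physical state spaces, where the reachable region can be a vanishingly small “pocket” of what is abstractly possible.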
Clearly, the issues of inductive logic, reasoning and science are pivotal to understanding the key design inference on inductive signs. The picture of how paradigms and research programmes rise and fall then sets that inference into a broader context that moves beyond simplistic falsificationism.
And these are issues we urgently need to discuss together, on all sides of the design debates. END