In the current VJT thread on 31 scientists who did not follow methodological naturalism, it is noteworthy that objectors have studiously avoided addressing the basic warrant for the design inference. Since that warrant is pivotal, but seems to be widely misunderstood or even dismissed without good reason, it seems useful to summarise it for consideration.
This having been done at comment 170 in the thread, it seems further useful to headline that summary and invite discussion:
_________________
>>F/N: It seems advisable to again go back to basics here: inductive reasoning and why it has significance in scientific work, which then has implications for the design inference.
_________________
Let us reflect, again, on basics.
A good point to begin is the IEP article on deduction and induction, which gives the modern view of induction (the older view of induction as mere generalisation has been superseded).
In short, deductive arguments draw out conclusions step by step through criteria of entailment relative to premises. Inductive arguments instead pivot on providing good reason to support a conclusion, even absent the deductive combination of valid form and true premises that would yield a sound argument and logically certain conclusions.
Of course, the truth of premises and how such are to be established is always an issue; especially as an infinite regress of successive challenges to premises is futile, and circularity in such a chain of challenges is equally futile.
In part, we appeal to the fund of our experience and assert plausible claims. We may also put up self-evident claims, on the grounds that to deny X immediately and patently lands one in absurdity; so, once we understand X, we see that it is so on pain of absurdity. (Think here of the consequences of the distinct identity of, say, a bright red ball A on a table, and the resulting dichotomy of the world, W = {A|~A}.)
But in many cases, we accept claims based on induction, e.g. that ravens are black, per reliable, empirically grounded generalisation. Obviously, we stand ready to modify this should we encounter a white or green one, etc. That is, encountering some x such that x is a raven but is non-black would disconfirm the generalisation that ravens are black. (Famously, this happened with the black swans of Australia, which overturned the old generalisation that all swans are white.) Likewise, it is not a certainty beyond possible doubt that there will be a sunrise on the morrow.
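To illustrate the defeasible character of such generalisations, here is a minimal sketch in Python; the observation records are invented for illustration only:

```python
# Toy illustration of enumerative induction and its defeasibility: a
# generalisation ("all ravens are black") stands provisionally until a
# counterexample turns up. The observation records are invented.

observations = [
    {"kind": "raven", "colour": "black"},
    {"kind": "raven", "colour": "black"},
    {"kind": "swan",  "colour": "white"},
]

def generalisation_holds(obs, kind, colour):
    """Check 'all <kind>s are <colour>'; return (holds, counterexample)."""
    for o in obs:
        if o["kind"] == kind and o["colour"] != colour:
            return False, o  # a single counterexample suffices
    return True, None

print(generalisation_holds(observations, "raven", "black"))  # (True, None)

# The famous black swan of Australia disconfirms "all swans are white":
observations.append({"kind": "swan", "colour": "black"})
print(generalisation_holds(observations, "swan", "white")[0])  # False
```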
A related concept is abduction: where, on a cluster of otherwise puzzling facts f1, f2 . . . fn, if explanation E were asserted, these would all follow; so we regard the facts as [provisionally] supporting the explanation. As the body of facts widens, we seek the best of competing, empirically reliable, well-supported explanations. This, too, is inductive argument, and it is crucial to scientific, forensic, historical and many other contexts of reasoning.
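A toy sketch of that inference-to-the-best-explanation pattern, with placeholder facts and hypotheses (nothing here is real data):

```python
# Toy inference to the best explanation: among rival hypotheses, prefer
# the one that would account for the most of the otherwise puzzling
# facts f1 .. fn. Facts and hypotheses are placeholders, not real data.

facts = {"f1", "f2", "f3", "f4"}

# what each candidate explanation would entail, if true
explains = {
    "E1": {"f1", "f2"},
    "E2": {"f1", "f2", "f3", "f4"},
    "E3": {"f3"},
}

def best_explanation(facts, explains):
    # rank candidates by how many of the facts they cover
    return max(explains, key=lambda h: len(explains[h] & facts))

print(best_explanation(facts, explains))  # E2, which covers all four
```

Of course, real inference to the best explanation weighs more than coverage (empirical reliability, simplicity, and so forth); the sketch captures only the ranking-by-facts-explained core.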
In this context we may see that scientific investigations seek ever more accurate and comprehensive descriptions, set in the context of ever-improving explanatory constructs . . . sometimes laws, sometimes models, sometimes theories; though such terms can be frustratingly loose in meaning. Such constructs should demonstrate empirical reliability through accurate predictive power, but we must recognise that this is not establishment of truth beyond correction.
Such considerations provide crucial background for the design inference.
That inference, made in a scientific context, points to observable phenomena such as functionally specific, complex organisation and/or associated information (FSCO/I), digitally coded functionally specific information, or fine-tuning. On a base of trillions of observations, we consistently see that, once we are beyond a reasonable threshold of complexity [500 – 1,000 bits works], such phenomena result from intelligent cause and not from blind chance and/or mechanical necessity. Analysis of the search-space challenge, on the gamut of our observed cosmos or of the solar system [our effective universe for chemical-level atomic interactions], strongly suggests the reason: the search challenge is too high for blind forces, but intelligently directed configuration (aka design) readily achieves such results.
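As a rough, back-of-envelope rendering of that search-space analysis: the round figures below are the commonly used stock assumptions (about 10^57 atoms in the solar system, about 10^17 s of available time, a generous 10^14 chemical-scale events per atom per second), not measurements:

```python
# Back-of-envelope rendering of the search-space challenge. The round
# figures below are stock assumptions, not measurements: ~1e57 atoms in
# the solar system, ~1e17 s of available time, and a generous ~1e14
# chemical-scale events per atom per second.

atoms_solar_system = 1e57
seconds_available  = 1e17
events_per_second  = 1e14

max_events = atoms_solar_system * seconds_available * events_per_second

config_space_500 = 2.0 ** 500          # distinct 500-bit configurations
fraction_sampled = max_events / config_space_500

print(f"max events:       {max_events:.2e}")        # ~1.00e+88
print(f"500-bit space:    {config_space_500:.2e}")  # ~3.27e+150
print(f"fraction sampled: {fraction_sampled:.2e}")  # ~3.06e-63
```

On those deliberately generous assumptions, the available events could sample only about one part in 3 × 10^62 of a 500-bit configuration space; that disproportion is the nub of the search-challenge claim.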
The comments in this thread show many cases in point.
At root, then, the design inference is little more than an expressed willingness to trust that base of observations and its analytical context. That is, we see here inference to a general or particular conclusion on a tested, empirically reliable sign.
This applies to the world of life, and to features of the observed cosmos.
NWE provides a useful summary of the general conclusion, i.e. design theory.
Of course, apart from cosmological design thought tracing to the 1950s and growing ever since, the modern school of thought began with Thaxton et al in the mid-1980s, and was extended across the 1990s by Dembski, Axe, Behe, Meyer and others. In the past sixteen years it has in fact created a growing body of published research, often in the teeth of determined opposition and outright censorship.
However, the core argument is readily accessible.
For instance, anyone who uses the Internet is familiar with coded text strings and the general causal source of such: intelligently directed configuration. Many are familiar with information processing machines that make such codes work. So, when we turn to the world of the living cell and observe similar codes and processing using molecular nanotechnology, the impression of design is overwhelming.
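As a quick illustration of how a coded string carries quantifiable information, at the standard 7 bits per ASCII character (so roughly 72 characters pass 500 bits and roughly 143 pass 1,000 bits); the sample sentence below is arbitrary:

```python
# Quick illustration: information carried by a coded text string at the
# standard 7 bits per ASCII character, against the thresholds above.
# (72 chars x 7 = 504 bits > 500; 143 chars x 7 = 1,001 bits > 1,000.)
# The sample sentence is arbitrary.

def ascii_bits(text: str, bits_per_char: int = 7) -> int:
    return len(text) * bits_per_char

msg = "anyone who uses the Internet is familiar with coded text strings"
b = ascii_bits(msg)
print(len(msg), b, b > 500)  # 64 chars -> 448 bits, just under 500
```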
The design inference, with a threshold of complexity sufficient that blind chance and/or mechanical necessity are maximally unlikely to be credible as material cause, follows as a simple induction; and a strongly supported one, with trillions of cases in point.
To test and overthrow it, it would be necessary to show that the forces of blind chance and/or mechanical necessity have sufficed, per our observation, to create such FSCO/I.
That has never been done; in fact, models for the origin of cell-based life and/or of major body plans have been put forth as reigning orthodoxy in spectacular violation of Newton's common-sense rules of reasoning. Here, the rule is that we should only admit, as explanatory constructs for things we see as traces of the remote past, causes shown to be capable of like effects here and now. This prevents us from putting up metaphysical speculations without warrant that the proposed causes are capable of the relevant effects.
How this was done, per fair comment, was through the injection of an exclusionary rule, multiplied by a polarising prejudice.
That is, suspicion of “the supernatural” led to the imposition of methodological naturalism, which permits only naturalistic causal explanations. So, even though blind chance and/or mechanical necessity have never actually been shown to have the power to create FSCO/I beyond 500 – 1,000 bits, that is the only class of explanation allowed. For “god of the gaps” and “the supernatural” are strictly forbidden and suspect.
(And in context it is no coincidence that the timeline for this seems to be across C19, as VJT supports in the OP above.)
Only, ever since Plato in The Laws, Bk X, it has been well known that another way to dichotomise causal factors is the natural [= blind chance and/or mechanical necessity] vs the ART-ificial [= intelligently directed configuration]. Where, we exemplify but do not exhaust the possibilities for intelligent design.
So, we need to start over, from the basics.
KF
PS: Functionally specific, complex organisation can be reduced to information content by seeing configurations as answers to strings of y/n questions in a description language that specifies parts, arrangement and coupling in a functional network. Orgel put this on the table back in 1973.>>
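PPS: As a minimal sketch of that PS point, here is a toy description language in Python. The network size, part types, orientation counts and coupling scheme are all invented for illustration; Orgel's own treatment is the cited source, not this sketch:

```python
# Minimal sketch of the PS: a configuration's information content as the
# length of a chain of y/n answers in a description language. Each field
# contributes log2(options) bits. All numbers here are invented; this is
# a hypothetical description language, not Orgel's own formalism.

from math import log2, ceil

def description_bits(n_nodes: int, part_types: int, orientations: int) -> float:
    bits = 0.0
    for node in range(n_nodes):
        bits += log2(part_types)    # which part occupies this slot?
        bits += log2(orientations)  # how is it oriented?
        if node > 0:
            bits += log2(node)      # which earlier node does it couple to?
    return bits

b = description_bits(n_nodes=50, part_types=64, orientations=16)
print(ceil(b))  # about 709 bits for this toy network, past the 500-bit mark
```

The point of the sketch is simply that, once a functional network must be specified part by part, arrangement by arrangement and coupling by coupling, the y/n-question chain (i.e. the bit count) grows quickly past the 500-bit threshold.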