# Logic and First Principles, 2: How could Induction ever work? (Identity and universality in action . . . )

In a day when first principles of reason are at a steep discount, it is unsurprising to see that inductive reasoning is doubted or dismissed in some quarters.

And yet, there is still a huge cultural investment in science, which is generally understood to pivot on inductive reasoning.

Where, as the Stanford Enc of Phil notes, in the modern sense, Induction “includes all inferential processes that ‘expand knowledge in the face of uncertainty’ (Holland et al. 1986: 1), including abductive inference.” That is, inductive reasoning is argument by more or less credible but not certain support, especially empirical support.

How could it ever work?

A: Surprise — NOT: by being an application of the principle of (stable) distinct identity. (Which is where all of logic seems to begin!)

Let’s refresh our thinking, partitioning World W into A and ~A, W = {A|~A}, so that (physically or conceptually) A is A i/l/o its core defining characteristics, and no x in W is A AND also ~A in the same sense and circumstances, likewise any x in W will be A or else ~A, not neither nor both. That is, once a dichotomy of distinct identity occurs, it has consequences:
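As a toy illustration (my own sketch, with purely hypothetical members of W), the partition and its immediate consequences can be checked mechanically:

```python
# Toy sketch: partition a world W into A and ~A on a core defining
# characteristic, then check the classic consequences of distinct identity.
# The members of W here are illustrative placeholders only.
W = {"electron", "proton", "red ball", "blue ball", "idea"}
A = {"red ball", "blue ball"}   # the things bearing the core characteristic
not_A = W - A                   # everything else in W

# Non-contradiction: no x in W is both A and ~A in the same sense.
assert A & not_A == set()
# Excluded middle: any x in W is A or else ~A, not neither nor both.
assert A | not_A == W
```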

Where also, we see how scientific models and theories tie to the body of observations that are explained or predicted, with reliable explanations joining the body of credible but not utterly certain knowledge:

As I argued last time:

>>analogical reasoning [–> which is closely connected to inductive reasoning] “is fundamental to human thought” and analogical arguments reason from certain material and acknowledged similarities (say, g1, g2 . . . gn) between objects of interest, say P and Q, to further similarities gp, gp+1 . . . gp+k. Also, observe that analogical argument is here a form of inductive reasoning in the modern sense; by which evidence supports and at critical mass warrants a conclusion as knowledge, but does not entail it with logical necessity.

How can this ever work reliably?

By being an application of the principle of identity.

Where, a given thing, P, is itself in light of core defining characteristics. Where that distinctiveness also embraces commonalities. That is, we see that if P and Q come from a common genus or archetype G, they will share certain common characteristics that belong to entities of type G. Indeed, in computing we here speak of inheritance. Men, mice and whales are all mammals and nurture their young with milk, also being warm-blooded etc. Some mammals lay eggs and some are marsupials, but all are vertebrates, as are fish. Fish and guava trees are based on cells and cells use a common genetic code that has about two dozen dialects. All of these are contingent embodied beings, and are part of a common physical cosmos.
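The inheritance point mentioned above can be sketched in code; the class names and attributes are illustrative only, not a detailed biological claim:

```python
# A minimal sketch of the computing sense of "inheritance": if P and Q
# instantiate a common archetype G, they share G's characteristics while
# remaining distinct through their own differences.
class Mammal:                        # the archetype G
    warm_blooded = True
    def nurtures_with_milk(self):
        return True

class Whale(Mammal):                 # P instantiates G
    lives_in_water = True

class Mouse(Mammal):                 # Q instantiates G
    lives_in_water = False

# Shared characteristics come from the common archetype...
assert Whale().nurtures_with_milk() and Mouse().nurtures_with_milk()
# ...while unique differences keep the objects distinct.
assert Whale.lives_in_water != Mouse.lives_in_water
```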

This at once points to how an analogy can be strong (or weak).

For, if G has in it common characteristics {g1, g2 . . . gn | gp, gp+1 . . . gp+k}, then if P and Q instantiate G, despite the unique differences they must have in order to be distinct objects, we can reasonably infer that they will both have the onward characteristics gp, gp+1 . . . gp+k. Of course, this is not a deductive demonstration; at first level it is an invitation to explore and test until we are reasonably, responsibly confident that the inference is reliable. That is the sense in which Darwin reasoned from artificial selection by breeding to natural selection. It works; the onward debate is over the limits of selection.>>

Consider the world, in situation S0, where we observe a pattern P. Say, a bright, red painted pendulum swinging in a short arc and having a steady period, even as the swings gradually fade away. (And yes, according to the story, this is where Galileo began.) Would anything be materially different in situation S1, where an otherwise identical bob were bright blue instead? (As in, strip the bob and repaint it.)

“Obviously,” no.

Why “obviously”?

We are intuitively recognising that the colour of paint is not core to the aspect of behaviour we are interested in. A bit more surprising, within reason, the mass of the bob makes little difference to the slight swing case we have in view. Length of suspension does make a difference as would the prevailing gravity field — a pendulum on Mars would have a different period.
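The intuition can be made concrete with the small-angle pendulum formula, T = 2π·sqrt(L/g): length and gravity appear in it, while colour (and, to first approximation, bob mass) do not. A minimal sketch, using standard approximate surface-gravity figures:

```python
import math

# Small-angle pendulum period: T = 2*pi*sqrt(L/g).
# Note what enters the formula (length L, gravity g) and what does not
# (paint colour; to a first approximation, the bob's mass).
def period(length_m, g):
    return 2 * math.pi * math.sqrt(length_m / g)

G_EARTH, G_MARS = 9.81, 3.71     # m/s^2, standard approximate values

t_earth = period(1.0, G_EARTH)   # ~2.01 s for a 1 m pendulum
t_mars = period(1.0, G_MARS)     # ~3.26 s: same pendulum, different world
assert t_mars > t_earth          # weaker gravity field, longer period
```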

Where this points, is that the world has a distinct identity and so we understand that certain things (here comes that archetype G again) will be in common between circumstances Si and Sj. So, we can legitimately reason from P to Q once that obtains. And of course, reliability of behaviour patterns or expectations so far is a part of our observational base.

Avi Sion has an interesting principle of [provisional] uniformity:

>>We might . . . ask â can there be a world without any âuniformitiesâ? A world of universal difference, with no two things the same in any respect whatever is unthinkable. Why? Because to so characterize the world would itself be an appeal to uniformity. A uniformly non-uniform world is a contradiction in terms.

Therefore, we must admit some uniformity to exist in the world.

The world need not be uniform throughout, for the principle of uniformity to apply. It suffices that some uniformity occurs. Given this degree of uniformity, however small, we logically can and must talk about generalization and particularization. There happens to be some “uniformities”; therefore, we have to take them into consideration in our construction of knowledge. The principle of uniformity is thus not a wacky notion, as Hume seems to imply . . . .

The uniformity principle is not a generalization of generalization; it is not a statement guilty of circularity, as some critics contend. So what is it? Simply this: when we come upon some uniformity in our experience or thought, we may readily assume that uniformity to continue onward until and unless we find some evidence or reason that sets a limit to it. Why? Because in such case the assumption of uniformity already has a basis, whereas the contrary assumption of difference has not or not yet been found to have any. The generalization has some justification; whereas the particularization has none at all, it is an arbitrary assertion.

It cannot be argued that we may equally assume the contrary assumption (i.e. the proposed particularization) on the basis that in past events of induction other contrary assumptions have turned out to be true (i.e. for which experiences or reasons have indeed been adduced) – for the simple reason that such a generalization from diverse past inductions is formally excluded by the fact that we know of many cases [of inferred generalisations; try: “we can make mistakes in inductive generalisation . . . “] that have not been found worthy of particularization to date . . . .

If we follow such sober inductive logic, devoid of irrational acts, we can be confident to have the best available conclusions in the present context of knowledge. We generalize when the facts allow it, and particularize when the facts necessitate it. We do not particularize out of context, or generalize against the evidence or when this would give rise to contradictions . . . [Logical and Spiritual Reflections, BK I, Hume’s Problems with Induction, Ch 2, The principle of induction.]>>

So, by strict logic, SOME uniformity must exist in the world; the issue is to confidently identify reliable cases, however provisionally. So, even if it is only that “we can make mistakes in generalisations,” we must rely on inductively identified regularities of the world. Where, this is surprisingly strong, as it is in fact an inductive generalisation. It is also a self-referential claim which brings to bear a whole panoply of logic; as, if it is assumed false, it would in fact have exemplified itself as true. It is an undeniably true claim AND it is arrived at by induction, so it shows that induction can lead us to discover conclusions that are undeniably true!

Therefore, at minimum, there must be at least one inductive generalisation which is universally true.

But in fact, the world of Science is a world of so-far successful models, the best of which are reliable enough to put to work in Engineering, on potential risk of being found guilty of tort in court.

Illustrating:

How is such the case? Because observing the reliability of a principle is itself an observation, which lends confidence in the context of a world that shows a stable identity and a coherent, orderly pattern of behaviour. Or, we may quantify. Suppose an individual observation O1 is 99.9% reliable (p = 0.999). Now multiply observations, each as reliable; the odds that all of these are somehow collectively in a consistent error fall as (1 – p)^n. Convergent, multiplied, credibly independent observations are mutually, cumulatively reinforcing, much as how the comparatively short, relatively weak fibres in a rope can be twisted and counter-twisted together to form a long, strong, trustworthy rope.

And yes, this is an analogy.

(If you doubt it, show us why it is not cogent.)
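The (1 – p)^n point above can be made concrete with a few illustrative numbers (assuming independence of the observations, as the rope analogy does):

```python
# If each of n credibly independent observations is 99.9% reliable,
# the probability that all of them err together collapses geometrically
# as (1 - p)^n. Illustrative numbers only.
p = 0.999                        # reliability of a single observation
for n in (1, 2, 5, 10):
    all_wrong = (1 - p) ** n     # chance every observation is in error
    print(f"n={n}: P(all wrong) ~ {all_wrong:.1e}")
```

Already at n = 5 the chance of a collectively consistent error is of order 10^-15; the mutual reinforcement grows very quickly.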

So, we have reason to believe there are uniformities in the world that we may observe in action and credibly albeit provisionally infer to. This is the heart of the sciences.

What about the case of things that are not directly observable, such as the micro-world, historical/forensic events [whodunit?], the remote past of origins?

That is where we are well-advised to rely on the uniformity principle and so also the principle of identity. We would be well-advised to control arbitrary speculation and ideological imposition by insisting that if an event or phenomenon V is to be explained on some cause or process E, the causal mechanism at work C should be something we observe as reliably able to produce the like effect. And yes, this is one of Newton’s Rules.

For relevant example, complex, functionally specific alphanumerical text (language used as messages or as statements of algorithms) has but one known cause, intelligently directed configuration. Where, it can be seen that blind chance and/or mechanical necessity cannot plausibly generate such strings beyond 500 – 1,000 bits of complexity. There just are not enough atoms and time in the observed cosmos to make such a blind needle in haystack search a plausible explanation. The ratio of possible search to possible configurations trends to zero.

So, yes, on its face, DNA in life forms is a sign of intelligently directed configuration as most plausible cause. To overturn this, simply provide a few reliable cases of text of the relevant complexity coming about by blind chance and/or mechanical necessity. Unsurprisingly, random text generation exercises [infinite monkeys theorem] fall far short, giving so far 19 – 24 ASCII characters, well below the 72 – 143 character threshold. DNA in the genome is far, far beyond that threshold, by any reasonable measure of functional information content.
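The 72 – 143 character range follows from the 500 – 1,000 bit thresholds at 7 bits per (standard) ASCII character; a quick arithmetic check:

```python
import math

# Converting the quoted complexity thresholds (in bits) into ASCII
# character counts, at 7 bits per standard ASCII character.
BITS_PER_ASCII_CHAR = 7
for bits in (500, 1000):
    chars = math.ceil(bits / BITS_PER_ASCII_CHAR)
    print(f"{bits} bits -> {chars} characters")
# 500 bits -> 72 characters; 1000 bits -> 143 characters,
# matching the 72 - 143 range given in the text.
```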

Similarly, let us consider the fine tuning challenge.

The laws, parameters and initial circumstances of the cosmos turn out to form a complex mathematical structure, with many factors that seem to be quite specific. Where, mathematics is an exploration of logic model worlds, their structures and quantities. So, we can use the power of computers to “run” alternative cosmologies, with similar laws but varying parameters. Surprise, we seem to be at a deeply isolated operating point for a viable cosmos capable of supporting C-Chemistry, cell-based, aqueous medium, terrestrial planet based life. Equally surprising, our home planet seems to be quite privileged too. And, if we instead posit that there are as yet undiscovered super-laws that force the parameters to a life supporting structure, that then raises the issue of where such super-laws came from: level-two fine tuning, aka front loading.

From Barnes:

That is, the fine tuning observation is robust.

There is a lot of information caught up in the relevant configurations, and so we are looking again at functionally specific complex organisation and associated information.

(Yes, I commonly abbreviate: FSCO/I. Pick any reasonable index of configuration-sensitive function and of information tied to such specific functionality; that is a secondary debate, where it is not plausible that, say, the amount of information in DNA and proteins or in the cluster of cosmological factors is extremely low. FSCO/I is also a robust phenomenon, and we have an Internet full of cases in point, multiplied by a world of technology amounting to trillions of cases, that show that it has just one commonly observed cause: intelligently directed configuration. AKA, design.)

So, induction is reasonable, it is foundational to a world of science and technology.

It also points to certain features of our world of life and the wider world of the physical cosmos being best explained on design, not blind chance and mechanical necessity.

Those are inductively arrived at inferences, but induction is not to be discarded at whim, and there is a relevant body of evidence.

Going forward, can we start from this? END

PS: Per aspect (one after the other) Explanatory Filter, adapting Dembski et al:

## 82 Replies to “Logic and First Principles, 2: How could Induction ever work? (Identity and universality in action . . . )”

1. 1
kairosfocus says:

Logic and First Principles: How could Induction ever work? (Identity and universality in action . . . including on the design inference)

2. 2
jawa says:

Another timely, refreshing review of fundamental concepts that are indispensable for serious discussions. Thanks.

3. 3
kairosfocus says:

Jawa, it seems logic and its first principles are at steep discount nowadays. I have felt strongly impressed that we need to look at key facets of argument which are antecedent to specifics of the case. It turns out that analogy is acknowledged as foundational to reasoning (and so to the warranting of knowledge), and that it is rooted in the principle of identity. Now, we see that induction — which, despite dismissive talk-points to the contrary is fundamental to science — is also similarly rooted. KF

4. 4
EricMH says:

This premise does not seem especially strong:

> A uniformly non-uniform world is a contradiction in terms.

Is the word ‘uniform’ the same in both cases? It seems like if we take this principle to a logical conclusion, then a completely random sequence should contain uniformity that we can generalize from, but that is false.

5. 5
kairosfocus says:

EMH, Avi Sion speaks to the logical import of suggesting a world with no universal uniformities. But on looking again at the suggestion; oops. The lack of universalities [say ~U] is inadvertently self referential and would have to hold across the world. It would be a case of U. Self-contradiction, so U is undeniable, the world necessarily has universal properties. (BTW, this also has nothing to do with generalising on random sequences, which do not exhaust the world; a better candidate is whether apparent laws of nature arrived at by observation of several cases are in fact universal.) So, the onward question is, to identify (at least provisionally) cases of such. In the above, I suggest one: that we may make mistakes with [inductive] generalisations, M. If we try a denial ~M, it is again self-referential and counters itself. This is also a case of arriving at a universal, undeniable property inductively as we know of the possibility of failure through actual cases. Most famously, the breakdown of classical Newtonian Physics from 1879 to 1930 or thereabouts. KF

PS: Let me clip AS: “can there be a world without any ‘uniformities’? A world of universal difference, with no two things the same in any respect whatever is unthinkable. Why? Because to so characterize the world would itself be an appeal to uniformity. [–> Conclusion:] A uniformly non-uniform world is a contradiction in terms.”

6. 6
EricMH says:

@KF, ok, I think I get it. Sounds like the standard response to relativism, “is it true there is no truth?” showing relativism is self contradicting.

However, it is unclear how this transfers to empirical modeling and prediction. If the world is a random sequence, then any appearance of a ‘law of nature’ is pure happenstance, and cannot be generalized. We thus cannot infer from perceived regularity that the world is not a random sequence, which is Hume’s argument. I don’t see how AS gets us out of the dilemma. For example, if I apply AS principle to a long sequence of random coinflips, then a run of heads is bound to show up. Wouldn’t AS require me to assume the run is a law, and predict the next coin flip will be heads?

The one way I can make sense of this principle is that in a truly random sequence all models are equally useless, so we don’t lose anything by inferring order where there is none. On the other hand, if the sequence is non random, then we do lose out by not inferring order where there is some. So, it is kind of a Pascal’s wager approach to induction. We never actually know to any degree whether induction is valid, and the only way we lose is when it is valid and we assume it is not. But, this does not sound like what AS is saying, since he seems to think there is an absolute law, not a gambler’s wager.

7. 7
kairosfocus says:

EMH, pardon but the world is not a sequence, here we are speaking of observed reality extending across space and time, evidently starting with a bang some 13.85 BYA. Second, it is not plausible that a Turing machine plus driven constructor could build such a cosmos as we observe; it’s not just conceptual models here, it is actual experienced reality. The attempt to deny the legitimacy of generalisation from a finite set of observations, on the so-called pessimistic induction turns on that generalisations have failed in some cases. But, not all, and I put up one not vulnerable to future observations. Its direct answer is, first, that the denial of universalisability runs into logical trouble as outlined. Next, the distinct identity of a world and its content is in part observable and identifiable at least to provisional degree, with significant reliability. So, given that there is evidence of lawlike patterns and that some may indeed be universal, we should not allow ourselves to lose confidence in reliable patterns on the mere abstract possibility that they may be erroneous. In short, science and engineering can be confident. KF

8. 8
EricMH says:

@KF, I agree your analysis works from an intuitive standpoint. I’ve never met a consistent Humean.

But, from the strictly logical standpoint, the case is not so clear to me. However, it is tough to decouple the logical argument from the intuitive argument in these sorts of discussions. That is what I’m trying to do with the coin flip example. How would the AS principle apply to a run of heads in a long sequence of fair coin flips, without a priori knowledge whether the coin is fair or not?

9. 9
kairosfocus says:

EMH, I think the context is scientific induction regarding a real world, not any one phenomenon in it. However, even if coin flips — or better, magnetisation patterns of paramagnetic substances — were utterly 50-50 flat random, they will collectively fit a binomial distribution, which is a level of universally applicable order. BTW, the Quincunx gives an interesting case that trends to the normal curve as an array of rods give a 50-50 R/L split, giving a classic bell. Bias the H/T states so p, (1 – p) are asymmetrical and you get related distributions. This illustrates how it is really hard to avoid having some universally applicable ordering. KF
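The binomial point in this comment can be checked directly; the number of ways to get k heads in n fair flips is C(n, k), which peaks at n/2 (a small sketch of the arithmetic, not from the thread):

```python
from math import comb

# Even "pure" 50-50 randomness obeys a universal order: the count of
# ways to get k heads in n flips is the binomial coefficient C(n, k),
# which forms the classic sharply peaked bell around k = n/2.
n = 10
counts = [comb(n, k) for k in range(n + 1)]
print(counts)   # [1, 10, 45, 120, 210, 252, 210, 120, 45, 10, 1]
assert max(counts) == counts[n // 2]   # the bulge sits at n/2
```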

10. 10
EricMH says:

@KF, hmm, that is a very interesting point. So even in the case of completely uniform randomness there is a generalized pattern. I stand corrected!

11. 11
kairosfocus says:

EMH, statistical thermodynamics is based on the order that emerges from large numbers of randomly interacting particles. For instance, the equilibrium is a cluster of microscopic states consistent with a macro-state that has overwhelming statistical weight. The coins or paramagnetic substance example is a fairly common first example and shows how the overwhelming number of possibilities is near 50-50 h/t in no particular order, and that even fairly small fluctuations are hard to observe though not strictly impossible. The overall pattern of possibilities forms a sharply peaked binomial distribution, e.g. the 500 or 1000 coin cases. This ties into the design inference as functionally specific complex configs are maximally implausible under blind chance and/or mechanical necessity, as we have a space of 3.27 * 10^150 or 1.07 * 10^301 possibilities, respectively. Even were every atom in the sol system [500 bit case] or the observed cosmos [1,000 bit case] an observer with a tray of coins, flipping at random every 10^-12 to 10^-14 s, the fraction of space that could be sampled is negligible. That’s why FSCO/I is not credibly observable on blind chance and/or mechanical necessity. When we look at DNA, which for a genome is well beyond such a range, seeing alphanumeric code, so language and algorithms, this therefore screams design. But, too often our senses are ideologically dulled, we are hard of hearing. KF
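The configuration-space figures quoted in this comment (3.27 * 10^150 and 1.07 * 10^301 for the 500- and 1,000-coin cases) are just 2^500 and 2^1000, and can be verified directly:

```python
# Verifying the quoted configuration-space sizes for 500 and 1,000
# two-state coins: 2^500 and 2^1000 possibilities respectively.
space_500 = 2 ** 500
space_1000 = 2 ** 1000

print(f"{space_500:.3e}")      # ~3.273e+150, i.e. 3.27 * 10^150
print(f"{space_1000:.3e}")     # ~1.072e+301, i.e. 1.07 * 10^301
assert len(str(space_500)) == 151    # a 151-digit number
assert len(str(space_1000)) == 302   # a 302-digit number
```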

12. 12
EricMH says:

On further thought, I still think there is a premise missing here.

These ID information measures are essentially some kind of hypothesis test.

We are observing

1) P(X|ID) > P(X|chance),

and then inferring

2) P(ID|X) > P(chance|X).

What principle, besides common sense, gets us from #1 to #2? This seems to be necessary for induction to work, but I do not see how AS uniformity argument would apply here.

13. 13
kairosfocus says:

EMH

First, induction is much bigger than ID, and it is at the core of a lot of reasoning in science and the day to day world. Given some common ideologies out there, it is induction we first need to address.

Next, Induction is not deduction, not even in a probabilistic sense. And most of it is not about Bayesian or Likelihood inference. Induction is not statistics. Induction is about reasonable, responsible inference on empirical evidence. Argument by support rather than deduction.

Which latter then runs into, how do you set up the premises. As in, if you have P => Q, and you don’t like Q, reverse: ~Q => ~P. That then exposes the real debate: premises.

I usually use abduction as relevant frame of such arguments. Observations F1, F2, . . Fn seem puzzling but some explanation E entails them. It predicts R1, R2 . . . and we see it being consistently correct. Then we make what is in deductive terms a logically fallacious move: accept E as reliable and credibly known (at least, provisionally).

Two things are going on, first it is an empirical observation, call it S, that E is reliable. It is a candidate universal (in a scientific context).

Reliable is not the same as definitively true.

But at second level, we have a conviction the world has universal properties and that some are identifiable on investigation, observation, analysis. So, when we see something that is consistently reliable, we accept it provisionally, open to correction. This reflects a weak, potentially defeatable form sense of knowledge — well warranted, reliable, credibly true.

Reasonable, responsible faith.

Then, what about the issue that any number of possible explanations could entail what we see? First, if we have in hand a cluster of candidates E1, E2, . . . Em, then we see which is best so far. If say Ei and Ej are “equally good” then we accept them as empirically equivalent. We may look at simplicity, coherence, not being simplistic or ad hoc etc.

One of the key tests is coherence with wider bodies of knowledge. Though, that can be overdone. For example, a common controlling a priori can bias across several fields of study.

Much of this is descriptive, summarising effective praxis.

Induction simply is not generally going to deliver utter certainty. So, we learn to live with a chastened view of our body of knowledge, attainable warrant and the balance between confident trust and openness to adjustment or replacement.

As has been on the table since Newton, Locke and company.

KF

14. 14
EricMH says:

What you are describing sounds pretty similar to CSI. Your scheme sounds equivalent to saying we pick the viable explanation that maximizes CSI.

So, from my previous formulation, what gets us from #1 to #2 is the premise that a viable explanation that maximizes a posteriori probability is the best.

It looks like the viability requirement is the same as Dembski’s detachability requirement, plus some other requirements such as parsimony and consistency.

I think this makes good sense, and is mathematically well founded in probability.

I will ponder how this addresses the Bayesian problem:

The missing term in my formulation is P(ID) and P(chance). Given that P(x|ID) > P(x|chance), the conclusion that P(ID|x) > P(chance|x) only follows if P(x|ID) * P(ID) > P(x|chance) * P(chance).
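In odds form (posterior odds = likelihood ratio × prior odds), this dependence on the priors can be sketched with purely illustrative numbers:

```python
# Bayes in odds form: posterior odds of ID over chance equal the
# likelihood ratio P(x|ID)/P(x|chance) times the prior odds
# P(ID)/P(chance). The same likelihood ratio can favour either
# hypothesis depending on the prior. All numbers are illustrative.
def posterior_odds(likelihood_ratio, prior_odds):
    return likelihood_ratio * prior_odds

lr = 100.0   # assumed: x is 100x more probable under ID than chance

print(posterior_odds(lr, 1.0))    # even priors: odds > 1, favours ID
print(posterior_odds(lr, 1e-4))   # strong prior against: odds < 1,
                                  # favours chance despite the same data
```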

A materialist must insist that a priori P(chance) > P(ID), so that P(x|ID) > P(x|chance) merely indicates there are some missing materialistic probability resources they have not identified yet, hence the multiverse hypothesis.

I will have to think about how your formulation gives a principled response to the materialist.

15. 15
kairosfocus says:

EMH,

the design inference on complex [functionally] specified information is an inductive inference in the abductive form. That is brought out through the design inference explanatory filter (especially in the per aspect form that I will append to the OP).

Similarly, Bayesian probability revision (and more importantly its extension into likelihood reasoning [an in-page in my always linked briefing note]) is again abductive, where in the latter case the relative likelihood of alternative hypotheses is on the table. This is of course essentially a statistical study tied into Bayesian subjective probabilities and thence wider probability theory. Which then often pointlessly bogs down in debates over defining and estimating probabilities in relevant contexts.

But inductive reasoning is much broader than Bayesian reasoning as extended into likelihoods etc.

Indeed, it comes from a different world of thought. One, where we have to credibly account for how people have reasoned empirically and fallibly but reliably enough to build civilisations including engineering disciplines, science and technology, management and academic disciplines such as history for many thousands of years. Something that is pretty messy but vital.

Especially as we know that inductive reasoning cannot reduce to deductive reasoning. For simple example, say we have a hypothesis H that explains observed or in-hand “facts” F1, F2 . . . Fn and predicts Pn+1, Pn+2 . . . (i.e. n being now).

But, to infer:

H => {F + P}

{F + P} is so,

Therefore, H

. . . is fallacious, affirming the consequent. In effect implication is not to be confused with equivalence aka double implication. Simplistically, If Tom is a cat then Tom is a mammal does not sustain that if Tom is a mammal Tom must be a cat. Ask any boy named Tom.

So, how is induction sustained as a responsible argument?

Especially, given that for most of history and even now for most real world affairs, Bayesian reasoning simply has not been on the table.

In short, how can one erect credibly reliable albeit inherently fallible [thus, in principle provisional] support for conclusions? Sufficiently reliable to stand up in a Tort case, to risk considerable sums of money on [building buildings or bridges or ships or company business models], or to trust with one’s life [medical treatment, vehicles, ships, aircraft, weapons . . . think of how a Katana is made following a highly complex, trade secret based traditional recipe handed down across generations], etc?

That is a far deeper challenge.

As I pointed out in the OP:

In a day when first principles of reason are at a steep discount, it is unsurprising to see that inductive reasoning is doubted or dismissed in some quarters.

And yet, there is still a huge cultural investment in science, which is generally understood to pivot on inductive reasoning.

Where, as the Stanford Enc of Phil notes, in the modern sense, Induction “includes all inferential processes that ‘expand knowledge in the face of uncertainty’ (Holland et al. 1986: 1), including abductive inference.” That is, inductive reasoning is argument by more or less credible but not certain support, especially empirical support.

How could it ever work?

A: Surprise â NOT: by being an application of the principle of (stable) distinct identity. (Which is where all of logic seems to begin!) . . . .

Consider the world, in situation S0, where we observe a pattern P. Say, a bright, red painted pendulum swinging in a short arc and having a steady period, even as the swings gradually fade away. (And yes, according to the story, this is where Galileo began.) Would anything be materially different in situation S1, where an otherwise identical bob were bright blue instead? (As in, strip the bob and repaint it.)

âObviously,â no.

Why âobviouslyâ?

We are intuitively recognising that the colour of paint is not core to the aspect of behaviour we are interested in. A bit more surprising, within reason, the mass of the bob makes little difference to the slight swing case we have in view. Length of suspension does make a difference as would the prevailing gravity field — a pendulum on Mars would have a different period.

Where this points, is that the world has a distinct identity and so we understand that certain things (here comes that archetype G again [–> which holds a cluster of in-common, i.e. universal characteristics that here would manifest as reliable regularities]) will be in common between circumstances Si and Sj. So, we can legitimately reason from P [–> case or cases Si together with inferred plausible, so far best explanation Ebi] to Q [–> a future situation Sj to be managed or interacted with i/l/o Ebi] once that obtains. And of course, reliability of behaviour patterns or expectations so far is a part of our observational base.

I have of course augmented slightly.

This is really descriptive so far, we do this on the assumption that there are reliable stable [= universalisable] and intelligible characteristics of the world.

How could this ever work or at least plausibly be acceptable?

That’s where Avi Sion’s observation becomes crucial.

To assume for argument that there are no such universal properties, ~U, leads immediately to the insight that [~U] would itself inadvertently exemplify a universal property. That is, the attempt to deny U instantly confirms it as undeniably and indeed self-evidently true. Universal, stable properties are inevitable characteristics of a world.

But are such amenable to our minds? Are they intelligible or an inscrutable enigma?

This can be answered through a case in point.

Following Josiah Royce and Elton Trueblood [who reminded us of Royce’s half-forgotten work] we can see that “error exists” is an undeniable truth. But also, we vividly recall class work, say, elementary school sums that came back full of big red X’s. (When I was in the classroom, I insisted on using green ink, having been traumatised in my own day.)

So, we can see a candidate: we may make errors in inductive generalisations, call this M. Try the denial ~M. Instantly on well known cases we know that ~M is actually false and we can see from the logic of inference to so far “best” explanation that such fallibility is locked into the logic. We do not have here a case of true premises and strictly deductive consequences guaranteeing the truth of conclusions.

Thus, we have a case of a certainly known undeniably true universal property accessed through inductive generalisation, M.

That may have been established on a trivial case [and is backed by a survey of the history of science], but it is a powerful result: there are intelligible, universal properties of the world. And, we also know that we may establish reliable, plausible but provisional generalisations or explanations through empirical investigations and linked reasoning. Equally, on relevant history of science and other disciplines.

We also know from modelling theory that a strictly false model may be highly reliable in a defined domain of testing. This gives us confidence to trust that the stability of the actual properties of the world will sustain models we can use to confidently design, build and act on reliable but provisional, weak form knowledge.

(Strong form knowledge is not merely reliable and credible or plausible but actually true . . . a very hard to meet, quite restrictive requirement. Not even complex axiomatic mathematical systems, post-Gödel, meet that standard. My own confidence in Math rests in material part on a large body of fact, demonstrated reliability and equally demonstrated coherence across domains, such as in the Euler expression 0 = 1 + e^(i*pi), etc.)

As for the multiverse hyp, this is the injection of an unobserved and likely unobservable entity, and it inadvertently crosses over into speculative metaphysics rather than science. Philosophy done in a lab coat is still philosophy, and it must answer to comparative difficulties across competing worldviews. Or else, we have grand question-begging.

Where, evolutionary materialistic scientism is manifestly self-referentially incoherent. J B S Haldane long since put his finger on what we can elaborate as a core problem:

“It seems to me immensely unlikely that mind is a mere by-product of matter. For if my mental processes are determined wholly by the motions of atoms in my brain I have no reason to suppose that my beliefs are true. They may be sound chemically, but that does not make them sound logically. And hence I have no reason for supposing my brain to be composed of atoms. In order to escape from this necessity of sawing away the branch on which I am sitting, so to speak, I am compelled to believe that mind is not wholly conditioned by matter.” [“When I am dead,” in Possible Worlds: And Other Essays [1927], Chatto and Windus: London, 1932, reprint, p.209. (NB: DI Fellow Nancy Pearcey brings this right up to date (HT: ENV) in a current book, Finding Truth.)]

Materialism is non-viable, though those caught up in the system don’t tend to see that clearly. Exposing the incoherence and asking, “first, justify your being significantly free, thus able to observe and reason accurately and responsibly, so your arguments have legs to stand on,” is an important move in breaking the spell of materialism dressed up in the lab coat and presuming it has cornered the market on rationality.

The evasiveness or cornered rat lashing out will soon enough reveal the problem.

I should note that Dembski has set his work in the context of abductive reasoning.

The big, civilisation level question is induction.

KF

PS: Let me clip my discussion on Bayesian inference (the linked has colour-coded elements that help to clarify). Of course, we should not get bogged down on so specific an issue given the much broader question on the table:

We often wish to find evidence to support a theory, where it is usually easier to show that the theory [if it were for the moment assumed true] would make the observed evidence “likely” to be so [on whatever scale of weighting subjective/epistemological “probabilities” we may wish etc . . .].

So in effect we have to move: from p[E|T] to p[T|E], i.e from “probability of evidence given theory” to “probability of theory given evidence,” which last is what we can see. (Notice also how easily the former expression p[E|T] “invites” the common objection that design thinkers are “improperly” assuming an agent at work ahead of looking at the evidence, to infer to design. Not so, but why takes a little explanation.)

Let us therefore take a quick look at the algebra of Bayesian probability revision and its inference to a measure of relative support of competing hypotheses provided by evidence:

a] First, look at p[A|B] as the ratio, (fraction of the time we would expect/observe A AND B to jointly occur)/(fraction of the time B occurs in the POPULATION).

–> That is, for ease of understanding in this discussion, I am simply using the easiest interpretation of probabilities to follow, the frequentist view.

b] Thus, per definition given at a] above:

p[A|B] = p[A AND B]/p[B],

or, p[A AND B] = p[A|B] * p[B]

c] By âsymmetry,” we see that also:

p[B AND A] = p[B|A] * p[A],

where the two joint probabilities are plainly the same, so:

p[A|B] * p[B] = p[B|A] * p[A],

which rearranges to . . .

d] Bayes’ Theorem, classic form:

p[A|B] = (p[B|A] * p[A]) / p[B]
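As a quick sanity check of steps a] through d], here is a small frequency-count sketch in the frequentist reading used above. The joint-count table is a made-up illustrative population, not data from the post:

```python
# Illustrative joint counts over a population of 100 cases
# (A and B are arbitrary events; numbers are assumptions for the demo).
n_AB, n_AnotB, n_notAB, n_notAnotB = 12, 8, 18, 62
N = n_AB + n_AnotB + n_notAB + n_notAnotB  # population size = 100

p_A = (n_AB + n_AnotB) / N          # fraction of the time A occurs
p_B = (n_AB + n_notAB) / N          # fraction of the time B occurs
p_A_and_B = n_AB / N                # fraction where A AND B jointly occur

# a]-b] conditional probability as a ratio of population fractions
p_A_given_B = p_A_and_B / p_B
p_B_given_A = p_A_and_B / p_A

# d] Bayes' Theorem, classic form: p[A|B] = p[B|A] * p[A] / p[B]
bayes_rhs = p_B_given_A * p_A / p_B
assert abs(p_A_given_B - bayes_rhs) < 1e-12  # the two routes agree

print(round(p_A_given_B, 6))  # 0.4
```

The assertion simply confirms that the rearrangement in steps b] and c] is an identity for any consistent count table.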

e] Substituting, E = A, T = B, E being evidence and T theory:

p[E|T] = (p[T|E] * p[E])/ p[T],

p[T|E] — probability of theory (i.e. hypothesis or model) given evidence seen — being here by initial simple “definition,” turned into L[E|T]:

L[E|T] is (by definition) the likelihood of theory T being “responsible” for what we observe, given observed evidence E [NB: note the “reversal” of how the “|” is being read]; at least, up to some constant. (Cf. here, here, here, here and here for a helpfully clear and relatively simple intro. A key point is that likelihoods allow us to estimate the most likely value of variable parameters that create a spectrum of alternative probability distributions that could account for the evidence: i.e. to estimate the maximum likelihood values of the parameters; in effect by using the calculus to find the turning point of the resulting curve. But, that in turn implies that we have an “agreed” model and underlying context for such variable probabilities.)

Thus, we come to a deeper challenge: where do we get agreed models/values of p[E] and p[T] from?

This is a hard problem with no objective consensus answers, in too many cases. (In short, if there is no handy commonly accepted underlying model, we may be looking at a political dust-up in the relevant institutions.)

f] This leads to the relevance of the point that we may define a certain ratio,

LAMBDA = L[E|h2]/L[E|h1].

This ratio is a measure of the degree to which the evidence supports one or the other of competing hyps h2 and h1. (That is, it is a measure of relative rather than absolute support. Onward, as just noted, under certain circumstances we may look for hyps that make the data observed “most likely” through estimating the maximum of the likelihood function — or more likely its logarithm — across relevant variable parameters in the relevant sets of hypotheses. But we don’t need all that for this case.)

g] Now, by substitution A –> E, B –> T1 or T2 as relevant:

p[E|T1] = p[T1|E]* p[E]/p[T1],

and

p[E|T2] = p[T2|E]* p[E]/p[T2];

so also, the ratio:

p[E|T2]/ p[E|T1]

= {p[T2|E] * p[E]/p[T2]}/ {p[T1|E] * p[E]/p[T1]}

= {p[T2|E] /p[T2]}/ {p[T1|E] /p[T1]}

h] Thus, rearranging:

p[T2|E]/p[T1|E]

= {p[E|T2]/ p[E|T1]} * {P(T2)/P(T1)}

i] So, substituting:

L[E|T2]/ L[E|T1] = LAMBDA

= {p[E|T2]/ p[E|T1]} * {P(T2)/P(T1)}

Thus, the lambda measure of the degree to which the evidence supports one or the other of competing hyps T2 and T1, is a ratio of the conditional probabilities of the evidence given the theories (which of course invites the “assuming the theory” objection, as already noted), times the ratio of the probabilities of the theories being so. [In short if we have relevant information we can move from probabilities of evidence given theories to in effect relative probabilities of theories given evidence, and in light of an agreed underlying model.]
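The chain of steps g] through i] can be checked numerically. In this sketch the priors and the conditional probabilities of the evidence are made-up illustrative numbers, not values from the post:

```python
# Illustrative priors and evidence-conditionals (assumed, for the demo)
p_T1, p_T2 = 0.6, 0.4          # prior probabilities of two competing theories
p_E_given_T1 = 0.05            # evidence unlikely under T1
p_E_given_T2 = 0.40            # evidence likely under T2

# i] LAMBDA = {p[E|T2]/p[E|T1]} * {P(T2)/P(T1)}
LAMBDA = (p_E_given_T2 / p_E_given_T1) * (p_T2 / p_T1)

# Cross-check directly via Bayes' theorem, p[T|E] = p[E|T]p[T]/p[E],
# assuming here that T1 and T2 exhaust the options (so p[E] marginalises).
p_E = p_E_given_T1 * p_T1 + p_E_given_T2 * p_T2
posterior_ratio = (p_E_given_T2 * p_T2 / p_E) / (p_E_given_T1 * p_T1 / p_E)
assert abs(LAMBDA - posterior_ratio) < 1e-12

print(round(LAMBDA, 4))  # 5.3333 -- evidence shifts support toward T2
```

Note how p[E] cancels in the ratio, exactly as in the algebra above; what cannot be escaped is the dependence on the prior ratio P(T2)/P(T1).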

All of this is fine as a matter of algebra (and onward, calculus) applied to probability, but it confronts us with the issue that we have to find the outright credible real-world probabilities of T1 and T2 (or onward, of the underlying model that generates a range of possible parameter values). In some cases we can get that, in others, we cannot; but at least, we have eliminated p[E]. Then, too, what is credible to one may not at all be so to another. This brings us back to the problem of selective hyperskepticism, and possible endless spinning out of too often specious or irrelevant but distracting objections [i.e. closed-minded objectionism].

Now, by contrast the “elimination” approach rests on the well known, easily observed principle of the valid form of the layman’s “law of averages.” Namely, that in a “sufficiently” and “realistically” large [i.e. not so large that it is unable or very unlikely to be instantiated] sample, wide fluctuations from “typical” values characteristic of predominant clusters are very rarely observed. [For instance, if one tosses a “fair” coin 500 times, it is most unlikely that one would by chance go far from a 50-50 split that would be in no apparent order. So if the observed pattern turns out to be ASCII code for a message, or to be nearly all heads, or alternating heads and tails, or the like, then it is most likely NOT to have been by chance. (See, also, Joe Czapski’s “Law of Chance” tables, here.)]
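That coin-toss point is easy to check by simulation; the number of runs and the 50-head cutoff below are illustrative choices, not from the post:

```python
# Simulate many runs of 500 fair coin tosses and count how often the
# head-count strays 50 or more from the expected 250.
import random

random.seed(1)  # fixed seed so the run is repeatable
trials, tosses = 10_000, 500
extreme = 0
for _ in range(trials):
    heads = sum(random.random() < 0.5 for _ in range(tosses))
    if abs(heads - 250) >= 50:
        extreme += 1

# The standard deviation is sqrt(500 * 0.25) ~ 11.2 heads, so a 50-head
# deviation is roughly 4.5 sigma; essentially no run should reach it.
print(extreme)
```

Deviations that wide are so rare that across ten thousand simulated runs the count is almost certainly zero, which is the “law of averages” intuition in valid form.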

Elimination therefore looks at a credible chance hyp and the reasonable distribution across possible outcomes it would give [or more broadly the “space” of possible configurations and the relative frequencies of relevant “clusters” of individual outcomes in it]; something we are often comfortable in doing. Then, we look at the actual observed evidence in hand, and in certain cases — e.g. Caputo — we see it is simply too extreme relative to such a chance hyp, per probabilistic resource exhaustion.

So the material consequence follows: when we can “simply” specify a cluster of outcomes of interest in a configuration space, and such a space is sufficiently large that a reasonable random search will be maximally unlikely, within available probabilistic/search resources, to reach the cluster, we have good reason to believe that if the actual outcome is in that cluster, it was by agency. [Thus the telling force of Sir Fred Hoyle’s celebrated illustration of the utter improbability of a tornado passing through a junkyard and assembling a 747 by chance. By far and away, most of the accessible configurations of the relevant parts will most emphatically be unflyable. So, if we are in a flyable configuration, that is most likely by intent and intelligently directed action, not chance.]

We therefore see why the Fisherian, eliminationist approach makes good sense even though it does not so neatly line up with the algebra and calculus of probability as would a likelihood or full Bayesian type approach. Thence, we see why the Dembski-style explanatory filter can be so effective, too.

16. EricMH says:

Thanks KF, that is a very detailed exposition on the topic, and I will continue thinking it over.

While you are right there is always some general pattern, even in a random world, I do not yet follow your example of how error allows us to infer the general pattern. It sounds a bit like Popper’s falsificationism, which is logically flawed per the fair coin example.

My current thought is to combine maximum entropy with Bayesian reasoning to eliminate the P(T1)/P(T2) term, since maximum entropy sets the term to 1. But I’m sure there is a problem with that approach.

17. kairosfocus says:

EMH

Let’s go back through 15:

To assume for argument that there are no such universal properties, ~U, leads immediately to the insight that [~U] would itself inadvertently exemplify a universal property. That is, the attempt to deny U instantly confirms it as undeniably and indeed self-evidently true. Universal, stable properties are inevitable characteristics of a world.

But are such amenable to our minds? Are they intelligible or an inscrutable enigma?

This can be answered through a case in point.

Following Josiah Royce and Elton Trueblood [who reminded us of Royce’s half-forgotten work] we can see that “error exists” is an undeniable truth. But also, we vividly recall class work, say, elementary school sums that came back full of big red X’s. (When I was in the classroom, I insisted on using green ink, having been traumatised in my own day.)

So, we can see a candidate: we may make errors in inductive generalisations, call this M. Try the denial ~M. Instantly on well known cases we know that ~M is actually false and we can see from the logic of inference to so far “best” explanation that such fallibility is locked into the logic.

[–> That is, it is first, part of our factual background that we do as a matter of experience make errors in inductive generalisation. One countervailing fact wrecks a universal claim. Also, we know from the logic of explanations, that the implication is not an equivalence so the support provided by observations so far is fallible, i.e. there is a possibility of error. Which is enough again.]

. . . We do not have here a case of true premises and strictly deductive consequences guaranteeing the truth of conclusions.

Thus, we have a case of a certainly known undeniably true universal property accessed through inductive generalisation, M.

[–> The mere logic establishes the possibility of error, and we know the fact of error from experience also.]

That may have been established on a trivial case [and is backed by a survey of the history of science], but it is a powerful result: there are intelligible, universal properties of the world. And, we also know that we may establish reliable, plausible but provisional generalisations or explanations through empirical investigations and linked reasoning. Equally, on relevant history of science and other disciplines.

On the Likelihood analysis, yes: if we are indifferent across T1 and T2 as to which is more likely, the subjective prior-probability ratio goes to 1, since for two options under indifference, 0.5/0.5 = 1.

But that means we have no basis for choosing one over the other, they are so far empirically equivalent.

Also, they would have to be very carefully chosen to be exhaustive of the possibilities, or P(T1) + P(T2) would sum to less than 1, perhaps by a large and unknown margin.

KF

18. EricMH says:

Are you saying that if M makes a prediction error then we can inductively infer the general observation that M is false? I also request shorter words and sentences. I have but a small brain that finds it difficult to grasp abstract concepts.

Equally weighting the priors means that P(x|A)/P(x|B) = P(A|x)/P(B|x). So, if A makes x more likely, then A is the better theory. This seems to be the hidden premise behind the ID inference to design.

19. Bob O'H says:

As you’ve very much strayed onto my territory, a couple of comments:

1. From a frequentist (or fiducial) standpoint, LAMBDA is a likelihood ratio
2. For a Bayesian, LAMBDA is a Bayes factor.

Both are used as measures of evidence. The difference is how other unknowns are treated: the frequentist maximises over them, the Bayesian marginalises.

It’s well known (at least in statistical circles) that the Bayes Factor is sensitive to the priors of the parameters.

I think one can interpret a Bayesian approach as induction under uncertainty. It allows us to update our knowledge as new observations come in, but because it assumes uncertainty in the observations and the process, it’s more flexible (essentially, as long as the model says that it is possible for black swans to exist, we don’t panic when we see one). But if we make an observation that has zero prior probability, then we do panic.
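That updating step can be sketched with a toy two-model example; the models and numbers are illustrative assumptions, not anything from the thread:

```python
# Two rival models: T_w ("all swans are white", black swans impossible)
# and T_b ("black swans possible"). Priors and likelihoods are assumed.
p_Tw, p_Tb = 0.99, 0.01        # prior: black swans thought very unlikely
like_black_Tw = 0.0            # T_w assigns zero chance to a black swan
like_black_Tb = 0.05           # T_b allows the occasional black swan

def bayes_update(p1, p2, like1, like2):
    """One Bayes step: renormalise prior * likelihood over the two models."""
    w1, w2 = p1 * like1, p2 * like2
    total = w1 + w2  # if total were 0, the observation had zero prior
                     # probability under BOTH models: the "panic" case
    return w1 / total, w2 / total

# Observe one black swan: T_w's zero likelihood wipes it out in one step.
p_Tw, p_Tb = bayes_update(p_Tw, p_Tb, like_black_Tw, like_black_Tb)
print(p_Tw, p_Tb)  # 0.0 1.0
```

Because the prior mixture still gave the black swan a small positive probability, the update goes through smoothly; only an observation impossible under every model in play breaks the machinery (division by zero above).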

It’s also worth noting that Bayesian mathematics only works when the assumed models are true. Which is unfortunate, as we all know that all models are wrong. Evidently, some are not so wrong as to stop being useful.

20. EricMH says:

@Bob O’H what are your thoughts on equally weighting the priors?

And, what does the occurrence of an event with zero prior probability mean? Does it mean the hypothesis is flat out wrong and another is needed?

21. EricMH says:

Ah, actually ID is not positing equal priors. The probabilistic resources serve as the prior for the chance hypothesis. Then, CSI is the probabilistic deficiency that is not accounted for.

22. Bob O'H says:

Equally weighting which priors? In the BF the prior for the models doesn’t appear, and for the parameters of each model it’s difficult to see (in general) how any equal weighting could be done: the models could have very different parameter space (as they do in design vs evolution, for example).

A zero probability event would be one that was considered impossible. As this is subjective, it means that it could be right, but that whoever assigned the probability thought it flat out wrong.

23. kairosfocus says:

EMH, look up active information. KF

24. EricMH says:

KF, my apologies. My last comment came across as rude, but I did not intend it as such. I will relook over all the items you have written here and active information, and I appreciate what was a lot of writing on your part. Many thanks for all the work you do here.

25. kairosfocus says:

BO’H: Yes, and the outline above (and more at the link from which I clipped in my briefing note) lays out the algebra and consequences i/l/o earlier discussions. P(E) is eliminated in the algebra, but the lock-up to P(T1) and P(T2) turns into a beauty contest where unknowns can walk on stage and steal the show. Unless T1 and T2 exhaust effective options, evaluating their probability is a lottery. In science (esp on origins) that means either broadening the T’s ridiculously to the point of vagueness or else locking in arbitrary, probably ideologically loaded narrow horizons. A more useful answer, in my view, is, in the context of FSCO/I, to go to stat thermo-D and consider the search challenge on relevant config spaces i/l/o atomic-temporal resources. The import is plain: past 500 – 1,000 bits worth, FSCO/I is essentially certainly by design. R/DNA in the cell easily surpasses that, showing also language in action. Life depends on language, pointing to design. But that is so often strictly forbidden, on ideological grounds, by the new materialist magisterium dressed up in lab coats. KF
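As a quick back-of-envelope check of that threshold claim, using the figures quoted in the comment (10^57 atoms for the Sol system, roughly 10^-14 s per state change, roughly 10^17 s available), with the script framing itself being illustrative:

```python
# Compare the size of a 500-bit configuration space with a generous
# upper bound on states searchable by the Sol system's atomic resources.
from math import log10

config_space = 500 * log10(2)   # log10 of 2^500 configurations
atoms_sol = 57                  # log10 of ~10^57 atoms (Sol system)
ops_per_sec = 14                # log10 of ~10^14 states/s (10^-14 s each)
seconds = 17                    # log10 of ~10^17 s (order of cosmic age)

# Every atom acting as an observer of one state per fast interaction:
log_states_searchable = atoms_sol + ops_per_sec + seconds  # 10^88 states
log_fraction = log_states_searchable - config_space

print(round(config_space, 1))   # 150.5 -- 2^500 is about 10^150.5
print(round(log_fraction, 1))   # -62.5 -- searchable fraction ~ 10^-62.5
```

On these assumed figures, the maximal blind search samples on the order of one part in 10^62 of a 500-bit space, which is the arithmetic behind the search-challenge argument; whether that framing settles the design question is of course the matter under debate in the thread.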

PS: If evidence of LANGUAGE as alphanumeric code in the cell is unconvincing, I suspect such a one will hardly be convinced by any empirical evidence. See Lewontin’s cat-out-of-the-bag remark on a priori materialism. Where, such is self-refuting for a person needing rational freedom to argue.

26. kairosfocus says:

EMH, sorry, rushed y/day so I just pointed. Active info is interesting. Rushed again gotta go catch a ferry for a day trip, returning from ANU the coming evening. KF

27. Bob O'H says:

kf @ 25 – one of the reasons I like the Bayesian approach is that it makes everything explicit, so it’s easier to see what formal assumptions you are making, and thus easier to see where you have to fudge things (and at some point you have to fudge things: all models are, after all, wrong). Which makes this worrisome:

Unless T1 and T2 exhaust effective options, evaluating their probability is a lottery. In science (esp on origins) that means either broadening the T’s ridiculously to the point of vagueness or else locking in arbitrary, probably ideologically loaded narrow horizons.

You realise that doing the modelling properly is going to involve a lot of fudging. TBH, I’m fine with this, because you can make it clear what bits are vague or arbitrary, so these can then be refined.

If “stat thermo-D” is genuinely a useful alternative approach, then it should, I think, converge to a proper analysis (or at least the differences should be identifiable). Essentially, you would be making different fudges. I think this would be fine as long as it’s clear (again) what fudges are being made.

28. kairosfocus says:

BO’H:

I am speaking to the issue of exploring large configuration spaces by blind search which is based on chance and/or necessity.

Let me cite Walker and Davies to give some context:

In physics, particularly in statistical mechanics, we base many of our calculations on the assumption of metric transitivity, which asserts that a system’s trajectory will eventually [–> given “enough time and search resources”] explore the entirety of its state space – thus everything that is physically possible will eventually happen. It should then be trivially true that one could choose an arbitrary “final state” (e.g., a living organism) and “explain” it by evolving the system backwards in time choosing an appropriate state at some “start” time t_0 (fine-tuning the initial state). In the case of a chaotic system the initial state must be specified to arbitrarily high precision. But this account amounts to no more than saying that the world is as it is because it was as it was, and our current narrative therefore scarcely constitutes an explanation in the true scientific sense.

We are left in a bit of a conundrum with respect to the problem of specifying the initial conditions necessary to explain our world. A key point is that if we require specialness in our initial state (such that we observe the current state of the world and not any other state) metric transitivity cannot hold true, as it blurs any dependency on initial conditions – that is, it makes little sense for us to single out any particular state as special by calling it the “initial” state. If we instead relax the assumption of metric transitivity (which seems more realistic for many real world physical systems – including life), then our phase space will consist of isolated pocket regions and it is not necessarily possible to get to any other physically possible state (see e.g. Fig. 1 for a cellular automata example).

[–> or, there may not be “enough” time and/or resources for the relevant exploration, i.e. we see the 500 – 1,000 bit complexity threshold at work vs 10^57 – 10^80 atoms with fast rxn rates at about 10^-13 to 10^-15 s leading to inability to explore more than a vanishingly small fraction on the gamut of Sol system or observed cosmos . . . the only actually, credibly observed cosmos]

Thus the initial state must be tuned to be in the region of phase space in which we find ourselves [–> notice, fine tuning], and there are regions of the configuration space our physical universe would be excluded from accessing, even if those states may be equally consistent and permissible under the microscopic laws of physics (starting from a different initial state). Thus according to the standard picture, we require special initial conditions to explain the complexity of the world, but also have a sense that we should not be on a particularly special trajectory to get here (or anywhere else) as it would be a sign of fine-tuning of the initial conditions. [–> notice, the “loading”] Stated most simply, a potential problem with the way we currently formulate physics is that you can’t necessarily get everywhere from anywhere (see Walker [31] for discussion). [“The ‘Hard Problem’ of Life,” June 23, 2016, a discussion by Sara Imari Walker and Paul C.W. Davies at Arxiv.]

That points to a fairly wide context.

KF

29. kairosfocus says:

PS: I am pretty well convinced that while Bayesian approaches can work in certain contexts as a statistical investigation (where it can be pretty effective), statistics has not swallowed up inductive logic. That is the key problem, and I spoke to it in the OP.

30. Bob O'H says:

Isn’t inductive logic just a simplified Bayesian model where probabilities can only be 0 or 1?

31. kairosfocus says:

BO’H: Inductive reasoning is reasoning on typically empirical support that is generally fallible and/or uncertain but ideally should be reliable. It need not be quantified (though we can argue that better/worse support could be seen as forming a ranking scale in some cases), and it certainly is not as a rule set on a binary, discrete scale of probabilities 0/1. KF

32. kairosfocus says:

PS: Typed while rocking on a ferry heading back into MNI.

33. Bob O'H says:

I thought inductive reasoning said something was either true (if there is no counter-example) or false (if there is a counter example). How is that not binary?

34. kairosfocus says:

F/N: I find an interesting categorisation of inductive argument types:

http://www.thelogiccafe.net/logic/ref3.pdf

Let me throw the ball back into play from this:

1. Causal Reasoning: The example of Chris-in-love is inference to a cause. The best — but not the only — interpretation of the data about Chris is that he is in love. Still, attributing causation can be very difficult . . . [However] Often times there are correlations between types of events but no causal link.

–> This raises the implication that we reason against a background context providing a plausibly reliable world model which has core elements regarded as practically certain and others that are more speculative-hypothetical

–> This raises issues similar to Lakatos on how an auxiliary belt provides sacrificial protection for core commitments and that core + belt are in play at all times when things are tested.

2. Argument from Authority: Very often our best reasons for believing something are expert testimony. Smoking causes cancer. I believe this but have never done the study. The experts tell us this is so: they do the causal reasoning and we reason they are right based on their expertise. Still, once was the time when the tobacco industry paid “experts” to testify that there was no causal link but just a correlation. One needs to be careful to make sure that (a) spokespersons cited as authorities truly do know the field of knowledge in question and are in a position to wisely judge, and (b) there are not other equally good authorities taking an opposed position. Like all inductive arguments, those from authority offer no guarantee that their conclusion is true. But, if the authority cited is a good one, and there is no other evidence to the contrary, then the conclusion is likely true.

–> 99+% of practical arguments rely or build on authority, so we had better learn how to manage it prudently (ponder dictionaries, textbooks and reference works, teachers and schools, accounting system journal entries, etc.)

3. Generalization: One of the most common sorts of inductive argument is from particular cases to a more universal statement about all members of a group . . . When one generalizes from a “sample”, one needs to be careful that the sample is a good representation of the whole group. (So, we ask for a “representative sample” when generalizing.) . . . .

–> Implies an appeal to in-common relevant properties, cf OP on archetypes

4. Statistical Generalization: Sometimes the generalization is not universal. Instead of saying “everyone finds 4.5 difficult”, one might conclude that most people do. A more sophisticated sampling, e.g., in election polling, will sample from a big group and conclude that x% of voters will vote for y . . . .

–> attempts to sample or represent populations and identify patterns based on sampling theory etc

5. Statistical Inference: This sort of reasoning moves from evidence about a group, often a very large group, to a conclusion about an individual or another group. Often the groups are explicitly described in statistical terms: “90% of my group got an A” or “most US citizens distrust tyranny”. Two important types of statistical inference are treated separately below: Arguments from Analogy and Predictions. In all cases of statistical inference, generalizations about groups are applied to make conclusions about particular individuals or particular groups of individuals. Such reasoning is the reverse of generalization . . . .

6. Prediction: From information about what has happened in times past, we make an inference to the future. So, predictions are a type of statistical inference.

–> Predictive power, demonstrated through repeated reliability, builds confidence in a claim or model

7. Argument by Analogy: Attempts to show a conclusion that some thing X has a quality q given that similar things Y, Z, etc. all have this same quality q . . .

–> “similar” is a tell-tale of having an in-common class, with archetype

KF

35. kairosfocus says:

BO’H:

pardon, but that is precisely what inductive reasoning is not: it is explicitly about uncertain but more or less credibly reliable inference which, if strong enough, builds knowledge in the defeatable-but-reliable-so-far sense.

A counterexample can discredit a generalisation, but in elaborate contexts, as Lakatos highlighted, a core is shielded by auxiliary, sacrificial elements, where both are always tested together. It therefore takes a lot to break a core paradigm commitment.

Which may be more philosophical than scientific, e.g. a priori evolutionary materialistic scientism. Which is self-refuting and necessarily false but institutionally deeply entrenched.

Lakatos also pointed out that major hyps are born “refuted,” live refuted and die refuted. That is, there are anomalies addressed through puzzle-solving which may grow into a crisis and revolution if a rival school gains enough support.

In this sort of world, we have to address plausibility, reliability, weak-form knowledge, inference to best current explanation, and the power of explicitly false [“simplified” is frankly euphemistic] models to guide inference, analysis, design, etc. Where, given tort law, if you get it badly wrong, you can get sued. And you may have to live with your haunting ghosts even if you win.

Inductive reasoning is messy, hard to categorise, often inescapably subjective and only sometimes amenable to neat statistical-probabilistic frameworks.

I suggest, especially for more or less elaborate explanatory contexts, that the abductive, inference-to-best-current-explanation model is the best approach. And yes, this too is an explanatory construct.

KF

36. john_a_designer says:

What kind of knowledge are the laws of nature? Are they deductively derived or inductively derived? I would say the latter: they have been derived by observing and experimenting on physical phenomena which we encounter in the world around us. For some reason we are so sure of these so-called laws and constants that we assume they are universal and unchanging, and can be used in premises from which we make theoretical deductions about other questions concerning the physical universe, including questions based on phenomena which are presently invisible to us, like dark matter and energy.

But are we really warranted in believing that the laws and constants of universe are universal and unchanging? Can we prove that they are or must we take them to be true as a matter of faith?

37. kairosfocus says:

F/N: Newton weighed in in Opticks Query 31:

As in Mathematicks, so in Natural Philosophy, the Investigation of difficult Things by the Method of Analysis, ought ever to precede the Method of Composition. This Analysis consists in making Experiments and Observations, and in drawing general Conclusions from them by Induction, and admitting of no Objections against the Conclusions, but such as are taken from Experiments, or other certain Truths. For [speculative, empirically ungrounded] Hypotheses are not to be regarded in experimental Philosophy. And although the arguing from Experiments and Observations by Induction be no Demonstration of general Conclusions; yet it is the best way of arguing which the Nature of Things admits of, and may be looked upon as so much the stronger, by how much the Induction is more general. And if no Exception occur from Phaenomena, the Conclusion may be pronounced generally. But if at any time afterwards any Exception shall occur from Experiments, it may then begin to be pronounced with such Exceptions as occur. [–> this for instance speaks to how Newtonian Dynamics works well for the large, slow moving bodies case, but is now limited by relativity and quantum findings] By this way of Analysis we may proceed from Compounds to Ingredients, and from Motions to the Forces producing them; and in general, from Effects to their Causes, and from particular Causes to more general ones, till the Argument end in the most general. This is the Method of Analysis: And the Synthesis consists in assuming the Causes discover’d, and establish’d as Principles, and by them explaining the Phaenomena proceeding from them, and proving [= testing, the older sense of “prove” . . . i.e. he anticipates Lakatos on progressive vs degenerative research programmes and the pivotal importance of predictive success of the dynamic models in our theories in establishing empirical reliability, thus trustworthiness and utility] the Explanations. [Newton in Opticks, 1704, Query 31, emphases and notes added]

KF

38. 38
john_a_designer says:

Karl Popper used the discovery of black swans in Australia, soon after Europeans arrived there, to illustrate the problems and shortcomings of inductive logic. However, there have been some other discoveries in science which are like the discovery of black swans. Here are a couple of examples:

Scientists had always believed that noble gases, also known as inert or rare gases, were chemically unable to react. Helium, neon, argon, krypton, xenon, and radon (all gases at room temperature) were viewed as the “loners” of the Periodic Table. Their inertness became a basic tenet of chemistry, published in textbooks and taught in classrooms throughout the world.

In other words, noble gases could not form chemical compounds. Indeed that is what I was taught as fact in my H.S. chemistry course in the 1960s. And there was good reason to believe that this was irrefutable or “settled science.”

Conventional scientific wisdom held that the noble gas elements could not form compounds because their electronic structure was extremely stable. For all except helium, the maximum capacity of the outer electron shell of the noble gas atom is eight electrons. For helium, that limit is just two electrons. These electron arrangements are especially stable, leaving the noble gases without a tendency to gain or lose electrons. This led chemists to think of them as totally unreactive.

Or in other words, this view was the scientific consensus – the OVERWHELMING consensus.

Except it wasn’t true. In 1962 Neil Bartlett, “who was teaching chemistry at the University of British Columbia in Vancouver, Canada,” succeeded in creating a compound that used xenon as one of its chemical components.

“He was certain that the orange-yellow solid was the world’s first noble gas compound. But convincing others would prove somewhat difficult. The prevailing attitude was that no scientist could violate one of the basic tenets of chemistry: the inertness of noble gases. Bartlett insisted that he had, to the amusement and disbelief of some of his colleagues! The proof was in the new compound he had made. That orange-yellow solid was subsequently identified in laboratory studies as xenon hexafluoroplatinate (XePtF6), the world’s first noble gas compound.”

https://www.acs.org/content/acs/en/education/whatischemistry/landmarks/bartlettnoblegases.html

Since then over 100 noble gas compounds have been discovered.

Another example of a well-established, settled science being overturned was a discovery made by Israeli scientist Dan Shechtman, “who suffered years of ridicule and even lost a research post for claiming to have found an entirely new class of solid material… when he observed atoms in a crystal he had made form a five-sided pattern that did not repeat itself, defying received wisdom that they must create repetitious patterns, like triangles, squares or hexagons.”

“People just laughed at me,” Shechtman recalled in an interview this year with Israeli newspaper Haaretz, noting how Linus Pauling, a colossus of science and double Nobel laureate, mounted a frightening “crusade” against him, saying: “There is no such thing as quasicrystals, only quasi-scientists.”

After telling Shechtman to go back and read the textbook, the head of his research group asked him to leave for “bringing disgrace” on the team. “I felt rejected,” Shechtman remembered.

http://www.reuters.com/article.....EP20111006

In 2011 Daniel Shechtman was awarded the Nobel Prize for his discovery of quasicrystals.

Ironically, Linus Pauling, “who mounted… a ‘crusade’” against Shechtman, is one of the “few chemists [who] questioned the absolute inertness of the noble gases” before Bartlett’s discovery in 1962.

Is there such a thing as settled science? Can we use induction in science to establish any kind of universal truth claim? Aren’t the so-called laws of nature universal truth claims? How were natural laws discovered? Are they not derived inductively?

39. 39
kairosfocus says:

JAD, yes, the provisionality of inductive generalisation has long been on the table, as I noted above by citing Newton in Opticks Query 31. This is part of why we must be careful not to imagine that likelihood-ratio based decisions between hyp 1 and hyp 2, with their required probabilities, are immune to unknown unknowns that may pop up in the future. Your list is a good example, as is the discovery of dinosaur soft tissues. KF
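The likelihood-ratio point can be illustrated with a toy Bayesian sketch (my own illustration, all numbers assumed): posterior odds equal prior odds times the likelihood ratio, and a hypothesis that was never put on the table simply does not appear in the comparison.

```python
# Toy sketch (assumed numbers): Bayesian comparison of two hypotheses.
# Posterior odds for H1 over H2 = prior odds * likelihood ratio.
prior_odds = 1.0                  # H1 and H2 taken as equally plausible a priori
likelihood_ratio = 0.75 / 0.25    # P(data | H1) / P(data | H2), assumed values
posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)             # 3.0 -- the data favour H1 three to one

# Note: the ratio only compares H1 with H2. An unconsidered H3 that
# predicts the data even better (an "unknown unknown") never enters it.
```

The design point is exactly KF's caution: the formalism is silent about hypotheses outside the comparison set.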

40. 40
kairosfocus says:

F/N: On digging around I have found some similar thinking to the OP. Mark Andrews:

>>ABSTRACT: Inductive conclusions rest upon the Uniformity Principle, that similar events lead to similar results. The principle derives from three fundamental axioms: Existence, that the observed object has an existence independent of the observer; Identity, that the objects observed, and the relationships between them, are what they are; and Continuity, that the objects observed, and the relationships between them, will continue unchanged absent a sufficient reason. Together, these axioms create a statement sufficiently precise to be falsified. Simple enumeration of successful observations is ineffective to support an inductive conclusion. First, as its analytical device, induction uses the contrapositive form of the hypothesis; a successful observation merely represents the denial of the antecedent, from which nothing follows. Second, simple enumeration uses an invalid syllogism that fails to distribute its middle term. Instead, the inductive syllogism identifies its subject by excluding nonuniform results, using the contrapositive form of the hypotheses. The excluded data allows an estimate of the outer boundaries of the subject under examination. But an estimate of outer boundaries is as far as the inductive process may proceed; an affirmative identification of the content of the subject never becomes possible.>>

Here, we see recognition that the identity and stability of the world order is a crucial aspect of the case.
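The abstract's point about the contrapositive can be checked mechanically with a small truth table (my own illustration, not from the quoted source): a confirming observation leaves the hypothesis undetermined, while a disconfirming one validly refutes it via modus tollens.

```python
from itertools import product

def implies(p, q):
    # material conditional: A -> B is false only when A is true and B is false
    return (not p) or q

# modus tollens: in every row where (A -> B) holds and B is false, A is false
mt_valid = all(not a for a, b in product([True, False], repeat=2)
               if implies(a, b) and not b)

# affirming the consequent: rows where (A -> B) holds and B is true
# do NOT force A to be true
ac_valid = all(a for a, b in product([True, False], repeat=2)
               if implies(a, b) and b)

print(mt_valid)   # True: the disconfirming inference is valid
print(ac_valid)   # False: confirmation does not entail the hypothesis
```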

Yes, considered in contrast to simple deductive arguments, induction looks like the proverbial threadbare poor cousin. Unsurprising, as you are judging an attempt to provide adequate but never certain support by the standard of an argument pattern that turns on entailment, so that given premises P, conclusions Q must strictly follow.

But then, whence P?

That brings in the debate over premises. In effect we believe P because of some warrant A. Which invites why A thence B, C . . . posing infinite regress, or question-begging circularity or finitely remote first plausibles F. Among which may be self-evident propositions S.

But we are now looking at worldview roots and in so doing we are dealing with comparative difficulties of various possible faith points. Absolute, cross the board certainty has vanished and in its place we must ponder reasonable faith in worldview first plausibles standing on comparative difficulties.

Boiling down, worldview level inference to best current explanation. Which is — lo and behold — an inductive pattern of argument as was noted previously.

In short, once we press hard deduction and induction are inextricably intertwined in the worldview roots of our reasoning.

That brings up an old trick, following Hume, Ayer et al. How do we justify induction: deductively or inductively, D vs I? If I, this is deemed circular. If D, how can we claim sufficient grounds to expect future cases to match past observations leading to a pattern? (The uniformity principle debate.) Things look a little threadbare and skepticism seems to be ruling the roost!

But, as Dale Jacquette points out in How (Not) to Justify Induction,

Against Deduction

(1) Deduction cannot be inductively justified. Induction offers only probably true conclusions that are not strong enough to uphold the necessity of deductively valid inferences.

(2) Nor can deduction be deductively justified — that would be viciously circular.
___________________

(3) If deduction could only be justified deductively or inductively, then deduction cannot be justified.

Are we at a Mexican standoff of mutually assured destruction?

The argument continues:

The collective force of arguments (A) and (B) might be paraphrased more simply as the conclusion that reasoning of any kind cannot consistently hope to justify itself. Put this way, the objections in (A) and (B) do not sound quite as startling or revolutionary as when only (A) is presented and the omission of (B) encourages the misleading impression that induction is at a particular justificatory disadvantage vis-à-vis deduction. We have now seen on the contrary that induction and deduction epistemically and justificationally are pretty much in the same leaky boat.

Are we all doomed to sink together, leaving hyperskepticism triumphant?

No.

The onward argument points to Aristotle, who “seems to recognize that trying to justify collectively the principles of logic by means of or appeal to logical principles is hopeless. Where we cannot derive we may need to presuppose. But what propositions are we right to presuppose?”

That is, we are again back at worldview roots and first plausibles defining faith points. What is a — or the most — reasonable faith?

Aristotle, and Immanuel Kant millennia later in the Critique of Pure Reason with respect to the justification of synthetic a priori propositions of a scientific metaphysics, offer a promising solution. The method is to identify those principles whose truth we cannot even question without presupposing that they are true. The justification for such first principles of logic is then not merely that they are indispensable according to Ockham’s razor, often uncertain in application when we are not exactly sure about what kinds of explanations we need to give and what entities or principles we absolutely need in order to make our explanations work, what reductions remain possible, and the like. Rather, the fundamental concepts and rules of logic and possibly other kinds of propositions are justified on the present proposal instead by virtue of their indispensability even in raising doubts about their truth or indispensability.

In short (though Kant would not like my terms) we are looking at self-evidence and coherence as part of our worldview roots thus first principles of right reason. As the OP discusses, the principle of identity and closely tied corollaries. Here, we may add that it is manifest that we inhabit a cosmos, not a chaos, i.e. an ordered system of reality. So, we credibly have a stable identity of our world i/l/o its core characteristics. Absent this recognition and its corollary of at least partly intelligible uniformity, we cannot start reasoning. Which is self-referential and existential.

To deny such is suicidal, intellectually and practically, so would be irresponsible and irrational. And indeed, lo and behold, skeptical objectors implicitly expect us to abide by such even as they try to cast doubt and undermine.

We may freely proceed on a reasonable, responsible faith basis tied to the principle of distinct identity.

Faith and reason, being clearly inextricably intertwined and entangled.

Locke is instructive:

[Essay on Human Understanding, Intro, Sec 5:] Men have reason to be well satisfied with what God hath thought fit for them, since he hath given them (as St. Peter says [NB: i.e. 2 Pet 1:2 – 4]) pana pros zoen kai eusebeian, whatsoever is necessary for the conveniences of life and information of virtue; and has put within the reach of their discovery, the comfortable provision for this life, and the way that leads to a better. How short soever their knowledge may come of an universal or perfect comprehension of whatsoever is, it yet secures their great concernments [Prov 1: 1 – 7], that they have light enough to lead them to the knowledge of their Maker, and the sight of their own duties [cf Rom 1 – 2, Ac 17, etc, etc]. Men may find matter sufficient to busy their heads, and employ their hands with variety, delight, and satisfaction, if they will not boldly quarrel with their own constitution, and throw away the blessings their hands are filled with, because they are not big enough to grasp everything . . . It will be no excuse to an idle and untoward servant [Matt 24:42 – 51], who would not attend his business by candle light, to plead that he had not broad sunshine. The Candle that is set up in us [Prov 20:27] shines bright enough for all our purposes . . . If we will disbelieve everything, because we cannot certainly know all things, we shall do muchwhat as wisely as he who would not use his legs, but sit still and perish, because he had no wings to fly.

A word to the wise

KF

41. 41
kairosfocus says:

PS: Why am I highlighting induction? Because inductive reasoning is central to science and to general life, thus unsurprisingly to the ID debate. Relatedly, over the years I have seen objectors to ID raise questions over induction, sometimes IIRC imagining that an objection to induction in general is an objection to ID but not to science etc. on the whole. In other cases, there is a conscious dismissal of inductive reasoning esp. i/l/o Popper’s Critical Rationalism or some derivative thereof. Others seem to think that broader induction is a loose extension (or anticipation) of Bayesian statistics. So, it seems important to our immediate purposes and to the wider issue of being responsible rational thinkers that we ponder the matter.

42. 42
kairosfocus says:

F/N: On black swans. (The feathered kind, not Taleb’s surprises.)

The issue seems to be one of where the borders of swan-nish-ness lie, i.e. is feather colouration essential to being a swan?

Well, consider cygnets, baby swans, which a quick image search will show tend to be grey-ish; which makes sense for camouflage. (Of course one wonders about the yellow colour of many baby ducks! Or is that mostly domesticated varieties?)

Is a cygnet a swan?

It certainly is what an adult was on hatching.

So, arguably colour of feathers is not a core characteristic of being a swan. From which, we see that the archetype for being a swan should reckon with the normal observable range of variation in known and accepted cases and likewise family resemblance to members of the higher class of water birds etc.

Likewise, let’s go back to the Australian surprise: black adult swans.

If whiteness of feathers was a core characteristic, black swanlike birds would likely have been viewed as near-swans but not swans. Instead, they were recognised as swans but with an unexpected colour.

In short, other characteristics were seen as more central to identity.

The classic example is flawed, but flawed in interesting ways.

We here see that pattern recognition can be intuitive, rooted in experience. Indeed, ability to correctly abstract concepts embedding common characteristics is a key cognitive capability that we routinely exploit in education. (It is also obviously tied to reasoning on analogies.)

Similarly, it is a longstanding observation that experienced field biologists can spot and correctly assign novel members of taxons at a glance.

This points to the archetypes and clusters of core characteristics, also to how we modify these concepts.

In light of all of this, the reaction to black swans (but not to cygnets) seems to be instructive on how we form concepts and draw out inductive inferences. It seems there was an implicit adult-ness expectation when we pondered “swans,” so that cygnets don’t count against “swans are white.” But then, black swans are seen and we adjust the border of our concepts and are forced to ponder similar cases.

This also points to Lakatos’ core vs sacrificial auxiliary belt of hypotheses. Colour of adult swans was not core, or Australian black swans would have been viewed differently.

I suspect the duck-billed platypus also forced a serious revision of what it means to be a mammal. Hairy, egg-laying, warm-blooded vertebrates — no wonder it was at first thought a practical joke or the like. So, we revise to think of placentals vs marsupials vs monotremes. (In addition to the platypus there are four species of spiny anteaters in New Guinea, as well as fossils from as far away as S America.)

The provisionality and crucial dependence on empirical support, need for tested reliability and for predictive power all come out.

More food for thought.

On an iconic case that seems to be a tad more complex than we thought. (Did we ponder the story of the ugly duckling?)

KF

43. 43
Bob O'H says:

Is there such a thing as settled science? Can we use induction in science to establish any kind of universal truth claim? Aren’t the so-called laws of nature universal truth claims? How were natural laws discovered? Are they not derived inductively?

You’ve hit an important realisation: science isn’t (and can’t be) about truths. Even if there are Laws of Nature (i.e. rules by which the universe works) we can’t be sure we know them.

44. 44
kairosfocus says:

BO’H: induction generally cannot deliver utter absolute certainty beyond correction. But then, post Godel, neither can Math. We are left in Locke’s world of reasonable faith, warranted but not utterly infallible knowledge and responsible albeit fallible praxis. However, given self evidence, there are certain things which are certain, necessarily true and serve as yardsticks. First principles of right reason are where these start. Where, those are in fact certain and knowable laws of how the world works (BTW, with the core of mathematics as an integral part extended from distinct identity A vs ~A thus two-ness). In that context science seeks empirical reliability AND truth, the accurate description of reality. The notion that science is about utility of effective models unconstrained by truth-seeking leads to utterly undermining its ethos and credibility, thence ruin. Actually, it is broader, without truth as norm, value, ideal and goal responsible rationality collapses. KF

45. 45
EricMH says:

If we cannot be certain about anything, why are we certain of that fact?

46. 46
kairosfocus says:

EMH, excellent question, exposing a self-referential absurdity. This is part of why I have long stressed the significance of self-evident first truths as start points and yardsticks for reasoning. For math (parent of statistics) I highlight that many math facts and even systems were on the table long before the grand axiomatisation projects of the past 200 years, and these controlled how axiomatisations work. Without logic, you cannot have the study of the logic of structure and quantity, probably the best summary definition of math. Science then depends critically on both math and logic. Part of logic is the logic of support rather than valid demonstration; that is, inductive logic — which is especially important in science. Inextricably intertwined is the epistemological issue of warranting truths then the metaphysical (ontological) one of understanding the logic of being thus the well-springs of reality. Where, we must not overlook that rationality is morally governed by virtue of known duties to truth, right reason, fairness etc. It all hangs together. KF

47. 47
EricMH says:

Didn’t you say in 44 that math facts are no longer certain post Godel?

48. 48
kairosfocus says:

EMH, not math facts such as the structure of the number system or the expression 2 + 3 = 5 but axiomatic systems are known to be incomplete and are not guaranteed to be coherent either. KF

PS: Though this is post axiomatisation it is a case of a set of facts:

{} –> 0

{0} –> 1

{0,1} –> 2

{0,1,2} –> 3

. . .
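That list is the von Neumann coding of the naturals, where each number is literally the set of its predecessors: 0 = {}, and n+1 = n ∪ {n}. A minimal sketch of the construction (my own illustration):

```python
# Von Neumann naturals: 0 is the empty set, and each successor is
# the union of a number with the singleton containing it.
def von_neumann(n):
    s = frozenset()        # 0 = {}
    for _ in range(n):
        s = s | {s}        # n+1 = n union {n}
    return s

assert von_neumann(0) == frozenset()
assert len(von_neumann(3)) == 3            # 3 = {0, 1, 2} has three members
assert von_neumann(2) in von_neumann(3)    # each number contains its predecessors
```

Here two-ness and the rest drop out of distinct identity alone: nothing but the empty set and the set-forming operation is assumed.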

49. 49
ET says:

Bob O’H:

You’ve hit an important realisation: science isn’t (and can’t be) about truths.

Linus Pauling disagrees:

âScience is the search for the truth–it is not a game in which one tries to beat his opponent, to do harm to others. We need to have the spirit of science in international affairs, to make the conduct of international affairs the effort to find the right solution, the just solution of international problems, and not an effort by each nation to get the better of other nations, to do harm to them when it is possible. I believe in morality, in justice, in humanitarianism.â

Albert Einstein concurs:

âBut science can only be created by those who are thoroughly imbued with the aspiration toward truth and understanding.â

50. 50
Ed George says:

ET

Linus Pauling disagrees

He also believed that high doses of vitamin C could cure the cold and cancer.

But Bob O’H is correct. Science is not about “Truth”. It is about postulating and modifying models to explain our observations.

51. 51
ET says:

Ed George:

He also believed that high doses of vitamin C could cure the cold and cancer.

He also won a Nobel Prize in a scientific field, chemistry.

I don’t know anyone who takes high doses of vitamin C who has caught a cold or has cancer. So perhaps it is more of a preventative measure. 🙂

But Bob O’H is correct.

No, he isn’t. Pauling’s words by far outweigh yours and Bob’s put together. And don’t forget Einstein.

Science is all about explaining the reality, ie the truth, behind what we observe.

52. 52
EricMH says:

If science is about explaining and predicting data with models, is it true or false to claim a model explains/predicts data effectively?

53. 53
Bob O'H says:

kf @ 44 –

BO’H: induction generally cannot deliver utter absolute certainty beyond correction. But then, post Godel, neither can Math.

I’m afraid you’ve misunderstood Gödel’s theorem. Post Gödel we can still say whether most mathematical statements are true or false. But he showed that there are some statements where this cannot be proved. So we still have certainty, just not about everything.

EricMH @ 52 – Define “effectively”!

54. 54
john_a_designer says:

The key point I wanted to make @# 36 & #38 is that natural law is something that is derived inductively through repeated observation of and experimentation upon physical phenomena we encounter in the world around us. These laws are not self-evidently true nor are they deduced from anything else except from the assumption or presupposition that ontologically the universe is an interdependent and coherent system of contingent physical things, and that epistemologically the human mind was somehow preadapted with the ability to discover how the physical world truly operates. In other words, science presupposes an ability for humans to discover truth about the universe which we inhabit.

Ironically, science is based on philosophical assumptions and presuppositions that cannot be established empirically or “scientifically.” That does not bode well for any world view that is naturalistic or materialistic.

55. 55
Ed George says:

EricMH

If science is about explaining and predicting data with models, is it true or false to claim a model explains/predicts data effectively?

I don’t see anything wrong with that. The big bang is a model that explains what we see quite effectively, even though we don’t know all of the details with certainty.

56. 56
ET says:

Actually the big bang had to be rescued by inflation in order to make sense of what we see.

57. 57
kairosfocus says:

BO’H:

I spoke to axiomatic systems for complex domains comparable to “Arithmetic,” not basic facts or results. The incompleteness theorems boil down, first, to undecidables: any consistent axiomatisation rich enough for arithmetic will be incomplete, containing truths it can neither prove nor disprove. Next, on consistency: such a system cannot prove its own consistency from within, so there is no internal guarantee that it is coherent.

As you may need it, SEP, summary:

Gödel’s Incompleteness Theorems
First published Mon Nov 11, 2013; substantive revision Tue Jan 20, 2015

Gödel’s two incompleteness theorems are among the most important results in modern logic, and have deep implications for various issues. They concern the limits of provability in formal axiomatic theories. The first incompleteness theorem states that in any consistent formal system F within which a certain amount of arithmetic can be carried out, there are statements of the language of F which can neither be proved nor disproved in F. According to the second incompleteness theorem, such a formal system cannot prove that the system itself is consistent (assuming it is indeed consistent). These results have had a great impact on the philosophy of mathematics and logic. There have been attempts to apply the results also in other areas of philosophy such as the philosophy of mind, but these attempted applications are more controversial.

I infer from this that axiomatised grand logic model worlds (the big ticket systems of Math) are exercises in reasonable, responsible faith and linked praxis rather than definitively and infallibly true and complete systems. I also have noted that we have a considerable body of established facts, entities and results antecedent to the modern axiomatisation which therefore influenced it decisively.

In effect, even in math and science we must all live by faith. The challenge is to have a reasonable, responsible faith that grounds a prudent, reliably effective praxis.

KF

58. 58
kairosfocus says:

F/N: I suggest that if science and wider rationality are not accountable to truth, they will end in shipwreck. KF

59. 59
kairosfocus says:

PS: That implies a distinction between a model and a theory. A model is frankly false (a “simplification” or even an operational emulator [black vs open box makes little difference]) and seeks only to capture a reliable and reasonably accurate result. Theories normally aim at significant truthfulness and so will normally be more complex and less amenable to easy calculation etc.

60. 60
kairosfocus says:

EG, the big bang point is reached by projecting cosmological expansion back per Hubble to a singularity, which is a natural start point. ~14 BY drops out as about right, which comports with cluster H-R patterns (where the main sequence branches off to the giants band), the low proportion of white dwarfs, etc. KF
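The back-projection amounts to the Hubble time, 1/H0. A rough numerical sketch (H0 and the conversion factors below are assumed round values for illustration):

```python
# Rough sketch: Hubble time 1/H0 as the ~14 BY age scale.
H0 = 70.0                  # assumed Hubble constant, km/s per Mpc
KM_PER_MPC = 3.0857e19     # kilometres in one megaparsec
SEC_PER_YR = 3.156e7       # seconds in one year
hubble_time_yr = (KM_PER_MPC / H0) / SEC_PER_YR
print(hubble_time_yr / 1e9)    # ~14 (billion years)
```

The real age estimate folds in deceleration and dark-energy history, but the order of magnitude already drops out of the bare expansion rate.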

61. 61
john_a_designer says:

If the speed of light is not a universal constant, what becomes of the big bang model?

62. 62
kairosfocus says:

JAD, is there good empirical evidence that this postulate of Relativity is wrong? KF

63. 63
EricMH says:

If all of the supports for our worldview are made of jelly, the whole thing collapses.

64. 64
kairosfocus says:

EMH, I infer, metaphorical sense. Yes, if a worldview’s core cannot bear the weight it must (another metaphor) the worldview will collapse. One point is that right reason is crucial, including adequate context for inductive reasoning. From OP on, I point to the principle of identity as giving a context for archetypes that bring out in-common characteristics that allow reasonable inference from initial cases to onward ones. The key point being that core characteristics and associated behaviours will be consistent from one instance to another. This points to a world model. Notice my discussion overnight on black swans, with the issue on feather colour as not being core. KF

65. 65
daveS says:

EricMH,

If all of the supports for our worldview are made of jelly, the whole thing collapses.

I would agree to some extent.

I also suspect there aren’t many/any “non-jelly” supports for the various worldviews. So if we expect to find a certain foundation, we are likely to be frustrated.

66. 66
kairosfocus says:

DS,

while it is true that all worldviews face difficulties so the key phil method is comparative difficulties, that is very different from grounds for despair or shrugging and saying it does not matter.

For one, evolutionary materialistic scientism/naturalism (which is ideologically and institutionally dominant in North America and W. Europe) is known to be self-referentially incoherent and self-falsifying in many ways. So, this can be set aside at the outset, together with its shibboleths against theism. That will at once clear most of the toxic, befuddling, polarising smog.

Next, we can start from error exists and from how it is equally self-evidently true that it is evil, wrong, wicked to kidnap, bind, gag, sexually assault and murder a young child for one’s sick pleasure. At once, epistemological and moral relativism shiver, crack, collapse.

Then, we can restore first principles of right reason to proper estimation, starting from distinct identity. Deductive and inductive logic duly guided by sound conscience can then begin to help us set the chaos we have made to rights.

I don’t know if we can now avert civilisational collapse with nukes in play, but we can begin to see a sound way forward.

Going beyond, no-one is seeking to claim that a worldview foundation can be erected on a basis of certain truths. The significance of inductive reasoning in our knowledge base and its inextricable intertwining with deductive reasoning long since put that to rest. We seek a reasonable, responsible faith with prudence and conscience as guides and guards not a pose of having cornered the market on the truth by our intellectual prowess and brilliance.

Remember, post Godel, this includes Math (and cf above for those who imagine this is a misreading).

Where of course this thread is looking at the role of inductive logic.

KF

PS: Here is my own exploration at 101 level: http://nicenesystheol.blogspot.....u2_bld_wvu

67. 67
daveS says:

KF,

while it is true that all worldviews face difficulties so the key phil method is comparative difficulties, that is very different from grounds for despair or shrugging and saying it does not matter.

Agreed. I would never claim it does not matter (and am not currently experiencing despair 🙂 ).

68. 68
john_a_designer says:

The point of my question at #61 (If the speed of light is not a universal constant, what becomes of the big bang model?) was that if the speed of light is not a universal constant then that completely undermines the big bang model. Indeed, not only is the big bang model undermined, so is most of modern cosmology, astronomy and physics.

But who proved that the speed of light (“c”) is a universal constant? From what I understand of the history of science it has never ever been questioned. For example, did Einstein ever question it? Has anyone ever attempted to prove the universality of c?

Is it a warranted assumption? How is it warranted? How is it warranted logically? …scientifically? …metaphysically or philosophically?

I do think it is warranted as a philosophical (world view) assumption. But worldview assumptions cannot be proven to be true logically or scientifically. In other words, it’s impossible to do science unless you begin with some philosophical assumptions about the world. However, those assumptions do not emerge out of science itself.

69. 69
daveS says:

I believe the constancy of c has been questioned and tested. AFAIK, it was Einstein who first postulated that c is a constant, although it was already implicit in Maxwell’s Equations.

I take it to be a conclusion rather than an assumption, btw.
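The sense in which c is implicit in Maxwell's equations: the wave speed they predict is 1/sqrt(mu0*eps0), built entirely from electromagnetic constants. A quick numerical check (my own sketch; the constant values are as I recall them from CODATA and should be treated as assumptions):

```python
import math

# Electromagnetic wave speed from Maxwell's equations: c = 1/sqrt(mu0 * eps0).
mu0 = 4 * math.pi * 1e-7      # vacuum permeability, N/A^2 (classical defined value)
eps0 = 8.8541878128e-12       # vacuum permittivity, F/m
c = 1.0 / math.sqrt(mu0 * eps0)
print(c)                      # close to 299792458 m/s, the measured speed of light
```

That a speed falls out of purely electrical and magnetic measurements, with no reference frame attached, is what made the constancy question so pressing.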

70. 70
kairosfocus says:

Actually, the history goes the other way.

The aether theory of electromagnetic waves (including light) ruled the roost c 1880, after Maxwell’s triumph.

Michelson & Morley set out to measure the drift rate relative to the aether, and thus the absolute velocity of Earth; in effect a Doppler-shift-plus-velocity-variation effect similar to that for other waves. They used a delicate interference experiment, running it at different points around our orbit.

They could not find it.

This was one of the start-points for the modern physics revolution.

Einstein’s relativity started from, let’s take the experimental result seriously. Postulate: the laws of physics take the same simplest form in an inertial frame of reference. Postulate: in such an IFR, speed of light in vacuo will take the same constant value, c.

In effect, start from the experimental results that there is no aether drift.
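For a sense of scale, the fringe shift Michelson and Morley expected on the classical aether picture is about 2Lv²/(λc²). The parameter values below are rough assumptions of my own for illustration:

```python
# Rough sketch (assumed parameters): classical aether-theory prediction
# for the Michelson-Morley fringe shift on rotating the apparatus.
L = 11.0              # effective arm path length, m (folded by mirrors)
v = 3.0e4             # Earth's orbital speed, m/s
c = 3.0e8             # speed of light, m/s
wavelength = 5.9e-7   # sodium light, m
expected_shift = 2 * L * v**2 / (wavelength * c**2)
print(expected_shift)    # roughly 0.4 fringe expected; the observed shift was far smaller
```

The apparatus could resolve shifts well below this prediction, which is why the null result carried such weight.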

The rest was history.

KF

71.
john_a_designer says:

Youâre missing the point, Dave. My question (and main point) was, âBut who proved that the speed of light (âcâ) is a universal constant?â Itâs never been PROVEN. How do you prove that something is universal? Universals are assumed to be true because they cannot be proven.

72.
daveS says:

Of course I agree you can’t prove it. But the empirical evidence suggests c is a universal constant. And the famous theory which postulates that c is a universal constant is very successful. Hence we (provisionally) conclude that it’s true. I don’t see any need to make assumptions here.

73.
john_a_designer says:

Dave,

I donât see any need to make assumptions here.

Maybe there isnât from a conceited, chauvinistic contemporary perspective but someone did begin with the assumption there are universal natural laws which govern the entire universe. For example, for Newtonian physics to work for the universe at large you need to assume Newtonâs laws of motion and gravity are universal. If no one had made that assumption we would still have no idea what keeps the planets of the solar system in orbit around the sun. Now we can say there is a preponderance of evidence that there are universal laws and constants. However, that hasnât always been true. Itâs a bit smug to take it all for granted.

74.
daveS says:

The weirdly hostile tone baffles me. But perhaps I’m misreading you.

In any case, it’s obvious to me that there are regularities, for example, in the physical world. I don’t have to assume there are such things as physical laws—I observe evidence for them every time I see a pendulum swinging or a baseball player hitting a home run. OTOH, I highly doubt that I would have had the insight to conjecture that the gravitational force I observe on Earth could also account for planetary motion. So yes, Newton (and Einstein, and many others) were obviously much smarter than me. 🙂

75.
kairosfocus says:

JAD, in Physics, there is hardly ever a proof, just arrival at a framework which shows itself empirically reliable with strong predictive power. In this case, the observation came first as a surprise, there was no aether drift seen in the Michelson-Morley experiments, and this was one of a cluster of anomalies that showed that classical Physics was broken. As noted, Einstein took the empirical finding seriously, not as proved but as empirically warranted. From this came one of the key breakthroughs. KF

76.
john_a_designer says:

Youâre missing the point Iâm trying to make, Kf. While scientists like Newton and Einstein can be said to have discovered laws of nature, they only did so because they already believed that nature was law-like.

Who discovered that nature was law-like? Where did that idea come from? I think the idea goes back at least to the ancient Greeks – maybe even further. In the Book of Job, God asks Job, “Do you know the laws that govern the heavens, and can you make them rule the earth?” (38:33, Contemporary English Version)

C. S. Lewis, I think, put it best when he said:

âMen became scientific because they expected Law in Nature, and they expected Law in Nature because they believed in a Legislator [or âlawgiverâ.] In most modern scientists this belief has died: it will be interesting to see how long their confidence in uniformity survives it. Two significant developments have already appeared â the hypothesis of a lawless sub-nature, and the surrender of the claim that science is true. We may be living nearer than we suppose to the end of the Scientific Age.â

Is the belief or expectation that nature is law-like something we arrived at by rigorously following rules of induction, or is it something inborn and intuitive? Some of the research carried out by child psychologist Jean Piaget seems to suggest that it is the latter.

77.
kairosfocus says:

I have been busy elsewhere (e.g. here), pardon.

I suggest, first, that we find mechanical regularities in the world such as the reliable annual, monthly and daily cycles. Also, food items remain that way as a rule and when they don’t there are reasons: spoilage, contamination, poison etc. Walking, breathing, sitting on chairs etc all rely on an often overlooked pattern of natural regularities. We can also identify such in our thoughts, e.g. the principle of distinct identity and linked things, including number etc.

Where, we also find ourselves under duties of care to truth, right reason, fairness etc. Moral government.

A world of law that indeed raises the question of a law-giver. So, ethical theism in the Judaeo-Christian tradition leads to a worldview that expects law and law that is in material part intelligible rather than inscrutable. C S Lewis was right. So was Paul in Rom 1 as I alluded to.

I would further suggest that we are here exploring first principles of right reason, noting that we reason inductively as a matter of fact, where such reasoning is too often treated as though it were little more than a fallacy. Reasoning based on support rather than entailment likely constitutes most practical thinking. So, it is vital for us to come to a better appreciation.

As the OP indicates, on pondering, I am — yet again — led to the principle of identity. A is itself i/l/o core characteristics. So, we can look at the world or a relevant aspect and on case studies look for stable patterns. Providing we have plausible reason to tie observed patterns P to such archetypal attributes, we should then be able to reasonably expect that in different or new situations, the pattern will continue to be stable. Where a strong track record of successful prediction is itself an empirical observation.
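The dichotomy invoked here (no x is A AND ~A; any x is A X-OR ~A) can be checked mechanically for propositions. A minimal truth-table sketch in Python; the function name is my own, purely illustrative:

```python
# Truth-table check of the dichotomy W = {A | ~A}: for any proposition a,
# non-contradiction (never A AND ~A) and excluded middle (always A X-OR ~A).

def check_dichotomy() -> bool:
    """Return True iff LNC and LEM hold for both truth values."""
    for a in (True, False):
        if a and (not a):        # LNC violated: x would be A AND ~A
            return False
        if not (a ^ (not a)):    # LEM violated: x neither A nor ~A
            return False
    return True

print(check_dichotomy())  # True
```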

For example, the colour of a pendulum bob is obviously less relevant to its swinging behaviour than other candidate influences; the surprising one being that for small swings (and negligible air resistance) mass also does not affect period. Similarly, given cygnet colouration and the highly variable colour of the wider duck family, it was risky to expect that, because all observed adult swans to date had been white, all swans would be white. The fact that settlers recognised black swans as swans despite colour is a clue.
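The pendulum point can be made concrete from the standard small-angle formula, T = 2π√(L/g); a short sketch (function name my own):

```python
import math

# Small-angle simple pendulum: T = 2*pi*sqrt(L/g).
# Note what is absent from the formula: the bob's mass and its colour.

def period(length_m: float, g: float = 9.81) -> float:
    """Period (seconds) of a simple pendulum for small swings,
    neglecting air resistance."""
    return 2 * math.pi * math.sqrt(length_m / g)

# A 1 m pendulum swings with the same ~2 s period whether the bob
# is lead or wood, black or white.
print(f"{period(1.0):.2f} s")  # 2.01 s
```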

Further, given that well regulated stability and our responsiveness to such are requisites of survival, it should be no surprise that there is an in-built biological expectation. But that does not prove that we are imposing perceived order on a chaos; we are back at senses fit to operate in their proper environment through accurate or reasonably accurate perceptions.

Another long story.

Of course, I believe that Science and Math can discover and responsibly (albeit provisionally) warrant truths about our world. But we must walk by faith, post the rise of modern physics and post Gödel.

Reasonable, responsible faith.

KF

78.
john_a_designer says:

Kf,

The problem with emphasizing logical thinking is that logical thinking has been supplanted, especially on-line, by a smug kind of scientism. Indeed, if you are truly “scientistic” in your thinking you don’t have to give logically sound arguments which rely on the rules of logic; all you need to do is say “science says” or something that’s rhetorically equivalent and be willing to not give in. In other words, instead of making an actual argument you become argumentative. This is something we see here (at least I do) on the part of our regular interlocutors all the time.

Calvin Beisner, at Town Hall (ht BA77) gives a good summary of what is going on in America and other western cultures.

The problem is in thinking that science is “the basis for knowledge.” It isn’t. It never has been. It never can be.

That is because scienceâin terms of scientific method, testing hypotheses by real-world observationâcannot justify any truth judgments based solely on empirical observation.

Empirical observation alone doesnât tell us how to sort the many different stimuli our senses receive at any given moment. It doesnât tell me why I should collect the stimuli of patterns of light and darkness on my computer screen and identify them as coherent, meaningful text and pictures while ignoring the stimuli of sound (our air conditioning system and my keystrokes and the vibration of my cell phone and kids shouting in the community pool across the street) touch (the coolness and hardness of the floor under my feet, the texture of my cotton shirt), smell (the soap residue on my just-washed hands), taste (the lingering flavor of my raisin bran with the more recent flavor of my green tea), and sightâŚ

Neither does mere sensory stimulation tell us the basic laws of thought (identity, contradiction, and excluded middle) or the rules of logical inference (see my “Summary of Major Concepts, Principles, and Functions of Logic”), yet those, along with the categories by which we untangle various sounds, sights, smells, textures, and tastes and group them into defined, external objects and events the recognition of which we share with others, are absolutely indispensable to all reasoning, including scientific reasoning about the external world.

And finally, empirical observation doesnât tell us there are objects external to ourselves. It only tells me that Iâm having experiencesâŚ But it cannot tell me those experiences are stimulated by things external to me. To be confident of that (and note the root of âconfident,â fides, faith), I must know something before thinking about my sensationsâŚ that thereâs an external world and that itâs understandable according to the structure of my mind and that my mind and body relate to it and each other in a way that facilitates my understanding it trulyâŚ

https://townhall.com/columnists/calvinbeisner/2014/07/23/the-threat-to-the-scientific-method-that-explains-the-spate-of-fraudulent-science-publications-n1865201

I remember just after graduating from high school in the 1960s reading a booklet by Francis Schaeffer entitled Escape from Reason. At the time I didn’t really understand it – maybe it was because I really didn’t think that people were that irrational. Now, however, after spending 12 years dialoguing with people pushing naturalism, materialism or secular progressivism on the internet, I understand what Schaeffer was talking about. But no doubt because he lived in Europe he encountered this kind of thinking a lot sooner than those of us who live in “fly-over country” here in the U.S.A.

79.
kairosfocus says:

JAD, you described an example of failure to correctly understand and apply logic, compounded by the usual categorical errors of naturalism, assuming that science monopolises serious knowledge. The study of knowledge and its conditions is of course another branch of philosophy, epistemology. Logic, of course, is a main branch of philosophy. Rhetoric is the study of persuasion, which Aristotle traces to the applied power of pathos, ethos, logos; in effect emotions, character/credibility, and facts and logic. Emotions are most persuasive but are no better than the underlying perceptions and judgements. Authority or credibility is no better than facts or logic, where 99% of arguments rely on this inductive (modern sense) appeal. Only facts and logic can actually warrant, but that is an acquired taste. Schaeffer saw many, many things that we need to listen to again. KF

80.
kairosfocus says:

F/N: I took time to follow up Calvin Beisner.

Here is his tipsheet on logic (at Wayback Machine): https://web.archive.org/web/20090824011241/http://www.ecalvinbeisner.com/freearticles/Logicsummary.pdf

In his essay that you excerpted, notice . . . I echo and amplify:

Empirical observation [–> the core of scientific evidence] alone doesn’t tell us how to sort the many different stimuli our senses receive at any given moment. It doesn’t tell me why I should collect the stimuli of patterns of light and darkness on my computer screen and identify them as coherent, meaningful text and pictures while ignoring the stimuli of sound (our air conditioning system and my keystrokes and the vibration of my cell phone and kids shouting in the community pool across the street), touch (the coolness and hardness of the floor under my feet, the texture of my cotton shirt), smell (the soap residue on my just-washed hands), taste (the lingering flavor of my raisin bran with the more recent flavor of my green tea), and sight (the dark corners of my desk, the brightness of my windows, the many colors of the books on my bookcases, and the cloudy sky and green trees out my window) all into one thing and call it a schmooglewop.

Neither does mere sensory stimulation tell us the basic laws of thought (identity, contradiction, and excluded middle) or the rules of logical inference [–> which pivot on distinct identity: world W = {A|~A}, so A is itself i/l/o core characteristics, no x in W is A AND ~A, any x is A X-OR ~A; and of course A vs ~A gives twoness, thus opening up number systems and Math, an abstract discipline that studies the logic of structure and quantity, being foundational to but distinct from science] (see my “Summary of Major Concepts, Principles, and Functions of Logic” [–> his link is dead, cf. Wayback Machine]), yet those, along with the categories by which we untangle various sounds, sights, smells, textures, and tastes and group them into defined, external objects and events the recognition of which we share with others, are absolutely indispensable to all reasoning, including scientific reasoning about the external world.

And finally, empirical observation doesn’t tell us there are objects external to ourselves. It only tells me that I’m having experiences. (You, too? By mere sensory observation I can’t know that.) But it cannot tell me those experiences are stimulated by things external to me. To be confident of that (and note the root of “confident,” fides, faith), I must know something before thinking about my sensations (if I may even be said to have any, according to the definition of “sensation” inherent in empiricism—but that’s another subject): that there’s an external world and that it’s understandable according to the structure of my mind and that my mind and body relate to it and each other in a way that facilitates my understanding it truly.

In short, one must begin with axioms, presuppositions. [–> hence worldviews and the challenge of reasonable, responsible faith requiring comparative difficulties] As a corollary to Herbert Stein’s Law (“If something cannot go on forever, it will stop.”) I posit Beisner’s Law: “If something not eternal does not start, it cannot continue.” [–> This points to necessary vs contingent beings and so to what is required as framework for a world, especially one inhabited by rational, responsible, morally governed creatures]

Though most modern scientists are empiricists, they are unaware that empiricism—real, consistent empiricism—leads directly not to skepticism, which is a good thing (1 Thessalonians 5:21, “Test all things, hold fast what is good.”) but to irrationalism, which, frankly, is what is taking over the world of science, particularly with the rise of post-normal, highly politicized science . . .

Much food for thought there.

KF

PS: Given how “skepticism” is too often used, I prefer “critical awareness.” Paul is right: test, hold on to the good; which implies that key truths are intelligible, can be tested and warranted, also that the good can be rationally discerned from the evil. That is, there is moral government in our rationality and there is also moral knowledge.

(This last cuts across how facts vs. values is too often used, to suggest the IS-OUGHT gap cannot be bridged. In fact it is bridged in the root of being by the inherently good and wise creator God, a necessary and maximally great being. Who, is worthy of loyalty and of the responsible, reasonable service of seeking, testing and doing the good that accords with our evident nature.)

81.
kairosfocus says:

F/N2: He goes on:

As such diverse historians and philosophers of science as Alfred North Whitehead, Pierre Duhem, Loren Eiseley, Rodney Stark, and many others have observed, and as I pointed out in two of my talks at the Ninth International Conference on Climate Change (ICCC), science—not an occasional flash of insight here and there, but a systematic, programmatic, ongoing way of studying and controlling the world—arose only once in history, and only in one place: medieval Europe, once known as “Christendom,” where that Biblical worldview reigned supreme. That is no accident. Science could not have arisen without that worldview.

Although many, probably most, of those at the ICCC weren’t, like me, evangelical Christians, I heard only one complaint about that message. It was from another friend, a geologist who is, I think, either atheist or agnostic. She protested that many atheists and agnostics are good scientists. That is true, and she’s one of them. But while people who don’t embrace the Biblical worldview and ethic can practice science for a while, little by little the foundation, resting on sand instead of solid rock, collapses, and with it so does the superstructure, in the midst of the winds and rains of money and power, which Pat Michaels rightly identifies as (among other things) the perverse incentives now firmly embedded in the whole way science gets funded and practiced in today’s world.

Science will restore its trustworthiness only when, and only to the degree that, its practitioners rediscover, and re-embrace, the Biblical worldview that is its only firm foundation.

Again, food for thought.

KF

82.
kairosfocus says:

F/N: Notice the use of an analogy (and adaptation of a classic parable) in Beisner’s argument:

while people who donât embrace the Biblical worldview and ethic can practice science for a while, little by little the foundation, resting on sand instead of solid rock, collapses, and with it so does the superstructure, in the midst of the winds and rains of money and power, which Pat Michaels rightly identifies as (among other things) the perverse incentives now firmly embedded in the whole way science gets funded and practiced in todayâs world.

Would anyone be prepared to dismiss the point simply on the fact that it is an analogy and analogies are inherently weak, almost fallacies? Or, that it appeals to literary forms?

Or, would it not be wiser to examine the structures and dynamics, then judge or evaluate whether or not such dynamics are relevant and substantially correct, even though presented in the guise of a literary descriptive model or metaphor? The latter is a compressed simile: instead of “A is like B,” “A is B.”

Key dynamics: foundations resting on geological substructures, and the ability to withstand storms that undermine a substructure of loose material. Support to a functional superstructure which is valued for its utility. The effect of undermining the foundation of a building. Storms.

The proper response is not to cry, “foundationalism” and sneeringly dismiss. Instead, let us examine whether the argument renders good support to conclusions.

That money and power can undermine and destroy the integrity of institutions and movements is a notorious fact. Science is an institutionalised movement of great utility in our civilisation. That science is prone to abuses, including funding-driven and power-driven agendas, is notorious. Ideology is also a problem. New ideas often advance one funeral at a time. (Notice, “advance” is yet another metaphor.)

It is true that “science—not an occasional flash of insight here and there, but a systematic, programmatic, ongoing way of studying and controlling the world—arose only once in history, and only in one place: medieval Europe, once known as ‘Christendom,’ where that Biblical worldview reigned supreme.” Some would debate whether it could not have come about otherwise, but the historically unique fact is a sobering point.

Likewise, it is obvious that our life of the mind is governed by known duties to truth, right reason, fairness/justice, neighbourliness etc. This is another way to say that rationality has responsibilities and is inescapably morally governed. Thus the IS-OUGHT gap is pivotal. (Another metaphor!) That — post Hume (and post Euthyphro) — points to the only level at which it can be bridged, the world-root. Thence, we see the only serious candidate (others invariably cannot bridge the gap), ethical theism. Which, in our civilisation, comes to us through the Bible and the Judaeo-Christian tradition.

Where, to fuse is and ought, we see the inherently good creator God, a necessary and maximally great being; worthy of loyalty and of the responsible, reasonable service of doing the good that accords with our evident nature. In the tradition, that God has spoken redemptively in the Scriptures and has come in fulfillment of prophecies of Messiah as Saviour, Lord, healer, transformer. So, major cultural impact is expected and manifest; including in science, education, technology, business and economics.

Countervailing movements or influences that undermine moral government of the mind and of knowledge are patently dangerous. Dirty money and abusive, corrupt power are clear examples. Amoral worldviews that invite nihilism are another relevant danger.

The metaphor injects another relevant point: seldom are substructures swept away all at once. There is a gradual undermining until critical mass is attained. The rate depends on the intensity of the storm.

Cultural habits and attachments to principle would retard corruption, but the undermining of cultural support will weaken these until a critical point is reached. Then a tipping point leads to spectacular disintegration and eventual collapse. The spreading of structural cracks is a further metaphor that invites itself, noting that such cracks can grow gradually or rapidly, often creeping along to critical length and then exploding suddenly.

One may reject such an analysis, but that would need to be on a substantial basis — there we go again — rather than a rhetorical brush-off.

KF

PS: Notice an evident case in point, where an institutionally dominant, controversial (and arguably potentially ruinous) agenda uses power to characterise attempts to critique and reform on objective criteria tracing to genetics and manifest in body form and major organ systems critical to reproduction — sex — as an “assault.” This is the now habitual move of stigmatising those who would question the agenda; in some hands that can then turn into accusations of “hate,” and by that further stigmatisation lead to the further act of censorship. Eventually, it is likely that the credibility of co-opted institutions will collapse, wreaking havoc.