Uncommon Descent Serving The Intelligent Design Community

For record: Questions on the logical and scientific status of design theory for objectors (and supporters)


Over the past several days, I have been highlighting poster children of the illogic and want of civility too often found among critics of design theory – even among those claiming to stand on civility and to be posing unanswerable questions, challenges or counter-claims to design theory.

I have also noticed the strong (but patently ill-founded) feeling/assumption among objectors to design theory that they have adequately disposed of the issues it raises and are posing unanswerable challenges in exchanges.

A capital example of this was the suggestion by ID objector Toronto that the inference to best current explanation used by design thinkers is an example of question-begging circular argument. Here, again, is his attempted rebuttal:

Kairosfocus [Cf. original Post, here]: “You are refusing to address the foundational issue of how we can reasonably infer about the past we cannot observe, by working back from what causes the sort of signs that we can observe. “

[Toronto:] Here’s KF with his own version of “A concludes B” THEREFORE “B concludes A”.

(Yes, as the links show, this is a real example of the type of “unanswerable” objections being touted by opponents of design theory. Several more like this are to be found here and here, in the recent poster-child series.)

But, it should be obvious that the abductive argument pioneered in science by Peirce addresses the question of how empirical evidence can support a hypothesis or explanatory model (EM) as a “best explanation” on an essentially inductive basis, where the model is shown to imply the already known observations, O1 . . . On, and may often be able to predict further observations P1 . . . Pm:

EM => {O1, O2, . . . On}, {P1, P2, . . . Pm}

Now, the first problem here is that there is a counterflow between the direction of logical implication, from EM to O’s and P’s, and that of empirical support, from O’s and P’s to EM. It would indeed be question-begging to infer from the fact that EM – if true – would indeed entail the O’s and P’s, plus the observation of these O’s and P’s, that EM is true.

But, guess what: this is a general challenge faced by all explanatory models or theories in science.

For, in general, to infer that “explained” O’s and P’s entail the truth of EM would be to commit a fallacy, affirming the consequent; essentially, confusing the fact that EM’s being so is sufficient for the O’s and P’s to be so with the claim that the O’s and P’s therefore entail that EM is so.

That is, implication is not equivalence.
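The point can be checked mechanically. Here is a minimal sketch (the names are illustrative, not from the post) that enumerates truth assignments and shows that “EM implies O” together with “O is observed” does not entail “EM”:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Look for assignments where both premises hold (EM -> O, and O is observed)
# yet the conclusion EM is false.
counterexamples = [
    (em, o)
    for em, o in product([True, False], repeat=2)
    if implies(em, o) and o and not em
]

# EM = False, O = True satisfies both premises while EM is false, so
# "EM -> O, O, therefore EM" (affirming the consequent) is invalid.
print(counterexamples)  # [(False, True)]
```

This is why empirical fit supports a model without proving it: the observations are consistent with EM, but also with EM being false.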

(One rather suspects that Toronto was previously unaware of this broad challenge to scientific reasoning. [That would be overwhelmingly likely, as the logical strengths and limitations of the methods and knowledge claims of science are seldom adequately taught in schools and colleges . . . and calls for such teaching – as has happened in Louisiana etc. – are too often met by advocates of evolutionary materialism with talking points that this is “obviously” an attempt to inject the Creationism bogeyman into the hallowed halls of “science” education.] So, quite likely, Toronto has seen the problem for the first time in connexion with attempts to find objections to design theory, and has assumed that it is a peculiar challenge to that suspect notion. But, plainly, it is not.)

The answer to this challenge, from Newton forward, has been to acknowledge that scientific theories are to be empirically tested and shown to be reliable so far, but are subject to correction in light of new empirical evidence and/or gaps in logic. Provisional knowledge, in short. Yet another case where the life of reason must acknowledge that trust – the less politically correct but apt word is: faith – is an inextricable, deeply intertwined component of our systems of knowledge and our underlying worldviews.

But, a second challenge emerges.

For, explanatory models are often not unique. We may well have EM1, EM2, . . . EMk, which may actually be empirically equivalent, or may all face anomalies that none are able to explain so far. So, how does one pick a best model, EMi, without begging big questions?

It is simple to state – but far harder to practice: once one seriously compares uncensored major alternative explanatory models on strengths, limitations and difficulties regarding factual adequacy, coherence and explanatory power, and draws conclusions on a provisional basis, this reasonably warrants the best of the candidates. That is, if there is a best candidate. (Sometimes, there is not. In that case, we live with alternatives, and in a surprising number of cases, it has turned out on further probing that the models are mathematically equivalent or are linked to a common underlying framework, or are connected to underlying worldview perspectives in ways that do not offer an easy choice.)

Such an approach is well within the region of inductive reasoning, where empirical evidence provides material support for confidence in – but not undeniable proof of – conclusions. Where, these limitations of inductive argument are the well known, common lot we face as finite, fallible, morally struggling, too often gullible, and sometimes angry and ill-willed human beings.

When it comes to explanatory models of the deep past of origins, we face a further challenge.

For, we cannot inspect the actual deep past, it is unobservable. (There is a surprisingly large number of unobserved entities in science, e.g. electrons, strings, the remote past and so forth. These, in the end are held on an inference to best explanation basis in light of connexions to things we can and do observe. That is, they offer elegantly simple unifying explanatory integration and coherence to our theories. But, we must never become so enamoured of these constructs that we confuse them for established fact beyond doubt or dispute. Indeed, we can be mistaken about even directly observable facts. [Looks like that just happened to me with the identity of a poster of one comment a few days ago, apologies again for the misidentification.])

So, applying Newton’s universality principle, what we do is to observe the evident traces of the remote past. We then set up and explore circumstances in the present, where we can see whether there are known causal factors that reliably lead to characteristic effects directly comparable to the traces of the past. When that is so, we have a basis for inferring that we can treat the traces from the past as signs that the same causal factor is the best explanation.

Put in such terms, this is obviously reasonable.

Two problems crop up. First, too often, in origins science, there is a resort to a favoured explanation despite the evident unreliability of the signs involved, because of the dominance of a school of thought that suppresses serious alternatives. Second, there can be signs that are empirically reliable that cut across the claims of a dominant school of thought. Design theorists argue that both of these have happened with the currently dominant evolutionary materialist school of thought. Philip Johnson’s reply to Richard Lewontin on a priori materialism in science is a classic case in point – one that is often dismissed but (kindly note, Seversky et al) has never been cogently answered:

For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”  

. . . .   The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]

This example (and the many others of like ilk) should suffice to show that the objectors to design theory do not have a monopoly on scientific or logical knowledge and rationality, and that they can and do often severely and mistakenly caricature the thought of design thinkers, to the point of making outright blunders. Worse, they then too often take that as an excuse to resort to ad hominem attacks that cloud issues and unjustly smear people, polarising and poisoning the atmosphere for discussion. For instance, it escapes me how some could ever have imagined – or imagined that others would take such a claim as truthful – that it is a “lighthearted” dig to suggest that I would post links to pornography.

Such a suggestion is an insult, one added to the injury of red herrings led away to strawmannish caricatures and dismissals.

In short, there is a significant problem among objectors to design theory that they resort to a habitual pattern of red herring distractors, led away to strawman caricatures soaked in poisonous ad hominem attacks, and then set alight through snide or incendiary rhetoric. Others who do not go that far, enable, tolerate or harbour such mischief. And, at minimum, even if there is not a resort to outright ad hominems, there is a persistent insistence on running after red herrings on tangents to strawman caricatures, and a refusal to accept cogent corrections of such misrepresentations.

That may be angrily brushed aside.

So, I point to the further set of problems with basic logic, strawman caricatures and personal attacks outlined in the follow-up post here. Let us pick up a particular example of the evasiveness and denial of well-established points, also from Toronto:

Physics both restricts and insists on different combinations of “information”.

Why is the word information in scare quotes?

Because, believe it or not, as has been repeatedly seen at UD, many objectors to design theory want to contend that there is no algorithmic, digitally (i.e. discrete-state) coded, specifically functional (and complex) information in D/RNA. In reply, I again clip from Wikipedia, speaking against known ideological interest:

The genetic code is the set of rules by which information encoded in genetic material (DNA or mRNA sequences) is translated into proteins (amino acid sequences) by living cells.

The code defines how sequences of three nucleotides, called codons, specify which amino acid will be added next during protein synthesis.
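The mapping just described can be illustrated with a toy translation routine. This is a sketch using only a handful of assignments from the standard codon table, not a full implementation:

```python
# A small subset of the standard genetic code (mRNA codons -> amino acids).
CODON_TABLE = {
    "AUG": "Met",  # methionine; also the usual start codon
    "UUU": "Phe", "GGC": "Gly", "GCU": "Ala",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Read an mRNA string three bases at a time, stopping at a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```

The discrete, rule-governed, reading-frame-based character of this mapping is what is meant by calling the stored sequences digital and algorithmic.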

Hopefully, these examples should suffice to begin to clear the air for a serious focus on substantial issues.

So, I think the atmosphere has now – for the moment – been sufficiently cleared of the confusing and polarising smoke of burning, ad hominem soaked strawman caricatures of design theory to pose the following questions. They are taken from a response to recent comments (with slight adjustments), and were – unsurprisingly, on track record – ignored by the objector to whom they were directed.

I now promote them to the level of a full, duly headlined UD post:

1: Is argument by inference to best current explanation a form of the fallacy of question-begging (as was recently asserted by design objector “Toronto”)? If you think so, why?

2: Is there such a thing as reasonable inductive generalisation that can identify reliable empirical signs of causal factors that may act on objects, systems, processes or phenomena etc., including (a) mechanical necessity leading to low contingency natural regularity, (b) chance contingency leading to stochastic distributions of outcomes and (c) choice contingency showing itself by certain commonly seen traces familiar from our routine experiences and observations of design? If not, why not?

3: Is it reasonable per sampling theory, that we should expect a chance based sample that stands to the population as one straw to a cubical hay bale 1,000 light years thick – rather roughly about as thick as our galaxy – more or less centred on Earth, to pick up anything but straw (the bulk of the population)? If you think so, why (in light of sampling theory – notice, NOT precise probability calculations)? [Cf. the underlying needle in a haystack discussion here on.]
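The scale of that comparison can be made concrete with rough order-of-magnitude arithmetic. In this sketch the straw volume (about one cubic centimetre) is my own illustrative assumption:

```python
# Order-of-magnitude check on the "one straw to a cubical hay bale
# 1,000 light years thick" picture.
LIGHT_YEAR_M = 9.4607e15            # metres in one light year
side_m = 1_000 * LIGHT_YEAR_M       # cube side: 1,000 light years
bale_volume_m3 = side_m ** 3

STRAW_VOLUME_M3 = 1e-6              # assumed: one straw ~ 1 cubic centimetre

straws_in_bale = bale_volume_m3 / STRAW_VOLUME_M3
print(f"bale volume:  {bale_volume_m3:.2e} m^3")  # ~8.47e+56 m^3
print(f"straw count:  {straws_in_bale:.2e}")      # ~8.47e+62 straws
```

On these assumptions, a single-straw blind sample from a population of roughly 10^62 straws is overwhelmingly likely to return straw, which is the point the question presses.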

4: Is it therefore reasonable to identify that functionally specific complex organisation and/or associated information (FSCO/I, the relevant part of Complex Specified Information as identified by Orgel and Wicken et al. and as later quantified by Dembski et al) is – on a broad observational base – a reliable sign of design? Why or why not?

5: Is it reasonable to compare this general analysis to the grounding of the statistical form of the second law of thermodynamics, i.e. that under relevant conditions, spontaneous large fluctuations from the typical range of the bulk of [microstate] possibilities will be vanishingly rare for reasonably sized systems? If you think not, why not?
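The statistical point can be illustrated with exact binomial arithmetic for a 500-coin system like the one used later in this post (the particular thresholds fed in are illustrative):

```python
from math import comb

def prob_at_least(k, n=500):
    """Exact probability of k or more heads in n fair-coin tosses."""
    return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

# Modest fluctuations from the 250-head mean are common; large
# fluctuations are vanishingly rare, as the statistical form of
# the second law would lead us to expect.
print(prob_at_least(251))   # roughly 0.48: a tiny excursion
print(prob_at_least(400))   # below 1e-39: a large fluctuation
print(prob_at_least(500))   # 1 / 2^500: all heads
```

Even for so small a system, a 400-head fluctuation is already unobservably rare; for macroscopic particle counts the suppression is far more extreme.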

6: Is digital symbolic code found to be stored in the string-structure configuration of chained monomers in D/RNA molecules, and does such function in algorithmic ways in protein manufacture in the living cell? If you think not, why not, in light of the generally known scientific findings on transcription, translation and protein synthesis?

7: Is it reasonable to describe such stored sequences of codons as “information” in the relevant sense? Why or why not?

8: Is the metric, Chi_500 = Ip*S – 500, bits beyond the solar system threshold and/or the comparable per aspect design inference filter as may be seen in flowcharts, a reasonable quantification or procedural application of the set of claims made by design thinkers? Or, any other related or similar metric, as has been posed by Durston et al, or Dembski, etc? Why, or why not – especially in light of modelling theory?
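As stated, the metric is simple to compute once Ip and S are assigned. Here is a sketch; the bit values fed in are illustrative assumptions, not measurements:

```python
def chi_500(ip_bits, s):
    """Chi_500 = Ip*S - 500: bits beyond the solar-system threshold,
    where S is 1 if functional specificity is objectively observed, else 0.
    A positive value is what the post treats as grounds to infer design."""
    assert s in (0, 1), "S is a dummy variable: 0 or 1"
    return ip_bits * s - 500

print(chi_500(100, 1))     # -400: functionally specific but below threshold
print(chi_500(10_000, 0))  # -500: complex, but no observed functional specificity
print(chi_500(1_000, 1))   # 500: past the threshold
```

The substantive work, of course, lies in measuring Ip and warranting the S = 1 judgment; the arithmetic itself is trivial.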

9: Is it reasonable to infer on this case that the origin of cell based life required the production of digitally coded FSCI — dFSCI — in string data structures, together with associated molecular processing machinery [cf. the vid here], joined to gated encapsulation, metabolism and a von Neumann kinematic self replicator [vNSR]? Why or why not?

10: Is it reasonable to infer that such a vNSR is an irreducibly complex entity and that it is required before there can be reproduction of the relevant encapsulated, gated, metabolising cell based life to allow for natural selection across competing sub populations in ecological niches? Why or why not? (And, if you think not, what is your empirical, observational basis for thinking that available physical/chemical forces and processes in a warm little pond or the modern equivalent, can get us, step by step, by empirically warranted stages, to the living cell?)

11: Is it therefore a reasonable view to infer – on FSCO/I, dFSCI and irreducible complexity as well as the known cause of algorithms, codes, symbol systems and execution machinery properly organised to effect such – that the original cell based life is on inference to best current explanation [IBCE], credibly designed? Why, or why not?

12: Further, as the increments of dFSCI to create dozens of major body plans is credibly 10 – 100+ mn bits each, dozens of times over across the past 600 MY or so, and much of it on the conventional timeline is in a 5 – 10 MY window on earth in the Cambrian era, is it reasonable to infer further on IBCE that major body plans show credible evidence of design? If not, why not, on what empirically, observationally warranted step by step grounds?

13: Is it fair or not fair to suggest that on what we have already done with digital technology and what we have done with molecular nanotech applied to the cell, it is credible that a molecular nanotech lab several generations beyond Venter etc would be a reasonable sufficient cause for what we see? If not, why not? [In short, the issue is: is inference to intelligent design specifically an inference to “supernatural” design? In this context, what does “supernatural” mean? “Natural”? Why do you offer these definitions and why should we accept them?]

14: Is it or is it not reasonable to note that, in contrast to the tendency to accuse design thinkers of being creationists in cheap tuxedos who want to inject “the supernatural” into science and so to produce a chaotic unpredictability:

a: From Plato in The Laws Bk X on, the issue has been explanation by nature (= chance + necessity) vs ART or techne, i.e. purposeful and skilled intelligence acting by design,

b: Historically, modern science was largely founded by people thinking in a theistic frame of thought and/or closely allied views, and who conceived of themselves as thinking God’s creative and sustaining thoughts — his laws of governing nature — after him,

c: Theologians point out that the orderliness of God and our moral accountability imply an orderly and predictable world as the overwhelming pattern of events,

d: Where also, the openness to Divine action beyond the usual course of nature for good purposes, implies that miracles are signs and as such need to stand out against the backdrop of such an orderly cosmos? [If you think not, why not?]

15: In light of all these and more, is the concept that we may legitimately, scientifically infer to design on inductively grounded signs such as FSCO/I a reasonable and scientific endeavour? Why or why not?

16: In that same light, is it the case that such a design theory proposal has been disestablished by actual observations contrary to its pivotal inductions and inferences to best explanations? (Or, has the debate mostly pivoted on latter-day attempted redefinition of science and its methods though so-called methodological naturalism that a priori undercuts the credibility of “undesirable” explanatory models of the past?) Why do you come to your conclusion?

17: Is it fair to hold – on grounds that inference to the best evolutionary materialism approved explanation of the past is not the same as inference to the best explanation of the past in light of all reasonably possible causal factors that could have been at work – that there is a problem of evolutionary materialist ideological dominance of relevant science, science education, and public policy institutions? Why or why not?

The final question for reflection raises the ethical-cultural implications of views on the above for origins science in society:

18: In light of concerns raised since Plato in The Laws Bk X, and up to the challenge posed by Anscombe and others – that a worldview must have a foundational IS that can objectively ground OUGHT – how does evolutionary materialism (a descriptive term for the materialistic, blind-chance-and-necessity-driven, molecules-to-Mozart view of the world) cogently address morality in society, and resolve the challenge that it opens the door to the rise of ruthless nihilistic factions whose view is, in effect, that as a consequence of living in a materialistic world, knowledge and values are inherently only subjective and/or relative, so that might and manipulation make ‘right’?

 (NB: Those wishing to see how a design theory based view of origins would address these and related questions may (i) cf. the 101 level survey here on. Similarly, you are invited to look (ii) at the UD Weak argument correctives here, (iii) at the UD Glossary here, (iv) at UD’s definition of ID here, (v) at a general purpose ID FAQ here, (vi) at the NWE survey article on ID here (the Wikipedia one being an inaccurate and unfair hit piece) and (vii) at the background note here on.)

So, objectors to design theory, the ball is in your court. (NB: Inputs are also welcome from design theory supporters.)

How, then, do you answer? On what grounds? With what likely consequences for science, society, and civilisation? END

Comments
CR: Your strawman caricatures have been addressed several times over. KF
kairosfocus
September 11, 2012, 12:33 PM PDT
@KF Still waiting for you to address #93. Also, from a comment on another thread...
@KF#337 See here and here. To summarize, Salman is a justificationist, yet he has not formulated a principle of induction that actually works in practice. Popper presents a straightforward logical argument as to why justification is impossible, which Salman did not refute. The same criticism is addressed here in this very thread, where Salman exhibits the second of three attitudes. IOW, these criticisms reflect confusion about Popper’s attitude. And they stem from the fact that, as justificationists, they simply cannot see it any other way. As such, they assume justification must be true. And there are plenty more misrepresentations of Popper. A few of which can be found here. Again, to reiterate…
Specifically, the fundamental flaw in creationism (and its variants) is the same fundamental flaw in pre-enlightenment, authoritative conceptions of human knowledge: its account of how the knowledge in adaptations could be created is either missing, supernatural or illogical. In some cases, it’s the very same theory, in that specific types of knowledge, such as cosmology or moral knowledge, was dictated to early humans by supernatural beings. In other cases, parochial aspects of society, such as the rule of monarchs in governments or the existence of God, are protected by taboos or taken so uncritically for granted that they are not recognized as ideas.
Inductivism suffers from the same fundamental flaw.
critical rationalist
September 11, 2012, 11:47 AM PDT
F/N: For those puzzled by the debate over verification, falsification, corroboration and induction, this little introductory note on Popper’s challenges with corroboration should suffice to show that tested empirical support for a claim is still important, and provides a degree of warrant for accepting (provisionally, of course) those claims which have a good track record of testing and prediction. In short, inference to best current explanation backed up by empirical testing and support is a serious view, and induction is not dead. KF
kairosfocus
September 10, 2012, 04:07 AM PDT
Onlookers: Coming back after some days. CR tried to carry forth the same basic scheme of objections elsewhere, I have answered here. Let me clip: ___________ >> Let me speak to points from 328 to show what I mean: 1: CR, 328: FSCIO/I isn’t well defined. Here, first, you don’t even seem to bother to get the abbreviation right: Functionally Specific Complex organisation and associated Information, FSCO/I. You also fail to address the way that it is developed, e.g. here on in context, and seem to want to take for granted the objections as though they are well founded. They are not, for reasons the context of the linked will make plain. Namely:
a: Functional specificity of configurations is objectively real and routinely observable; just think about finding a spare part, or of what happened when that comma sent a NASA rocket veering off path and forced a self-destruct.

b: Complex, functionally specific organisation with associated information is equally real and observable, as we can see from how AutoCAD etc. in effect create node-and-arc meshes to describe objects and functional networks through sets of bit strings.

c: As the AutoCAD file size number also shows, such are measurable in bits.

d: In addition, we can show that on the gamut of the solar system, the maximum sample that can be taken with atomic resources, using the fastest ionic chemical reaction rates as clock tick, is as one straw to a cubical haystack 1,000 LY across, i.e. about as thick as our galaxy.

e: So, sampling theory tells us that – even if such were superposed on our galaxy – we have no right to expect anything from such a sample but straw. This is the needle in the haystack, on steroids.

f: In addition, we may quantify this threshold, once we can observe functional specificity being present (and using S = 1/0 as a dummy variable, default 0 but 1 if FS is objectively present) and produce a Chi_500 metric, as the linked shows, where measured info content is I: Chi_500 = I*S – 500, bits beyond the solar system threshold of sufficient complexity to be FSCO/I
I therefore must say to you that beyond a certain point, sustaining a dismissive distortion in the face of plainly adequate correction becomes willful distortion of truth.

2: Nor can we observe causes

This is at best nearly meaningless pedantry. Consider a dropped heavy object, where reliably we see that it falls at 9.8 N/kg. This is mechanical necessity in action. Similarly, if the object is a fair die, it will reliably then tumble and come to rest with uppermost sides from the set {1, 2, . . . 6} at probability 1/6 per face. Much the same would obtain for a 2-sided die, i.e. a coin, where the H/T would be with probability 1/2 apiece. This is chance based high contingency. Now, if we had a string of 504 such coins in a slotted tray, we could find the coins in states from TT . . . T to HH . . . H by chance and/or by choice. And as a simple case, if we were to see the coins arranged so as to show the first 72 letters of this post, in order, we would with all but certainty have excellent reason to infer that the best and empirically warranted explanation of such was intelligently directed organising work (IDOW), AKA design. It is quite reasonable to say of such that we may see the relevant causes in action, and that we can trace them from empirically testable, reliable signs. For instance, due to the binomial distribution for 504 coins [~ 5.24*10^151 possibilities], the at-random tosses would be overwhelmingly near 50:50 H/T in no particular order. The bare possibility of getting a special arrangement as above would be so remote on the gamut of our solar system’s atomic and temporal resources that we can dismiss the possibility of this by chance as all but impossible. Such is, reliably, empirically unobservable. An unwinnable lottery. It is thus reasonable to say that we observe causes in action: mechanical necessity leading to natural regularities, chance contingency to stochastic distributions, and choice contingency often leading to things such as FSCO/I.
3: you’re ignoring what we do know about designers: namely our best current explanation for how all knowledge is created. Strawman. 4: I’m pointing out the ambiguity of the terms “relationship” and arbitrary” in the first premise of UB’s argument. Strawman. Context makes the meaning abundantly clear. 5: Entirely new cells are constructed when they divide. This includes all of the components of the system you are referring to. Unless a designer is intervening to build tRNA when a cell divides, the knowledge of how to construct all of them is found in the genome. Strawman, off a red herring. What you are distorting is the reported, easily shown fact:
KF: the AA’s loaded on the tRNA’s that key to the mRNA codons, are loaded on a standard CCA end. They are INFORMATIONALLY loaded based on the config of the particular tRNA, by special loading enzymes. That is the connexion between the codon triplet and the AA added to the protein chain is informational not driven by deterministic chemical forces.
You are ducking the established fact of INFORMATION ENCODED IN MATERIAL MEDIA AND ALSO USED TO DECIDE WHICH aa GOES ON WHAT tRNA, TO MATCH TO CODON IN THE RIBOSOME, SO CORRECTLY CHAINING A PROTEIN THROUGH TRANSLATION FROM THE RNA CODE. And, since that process has in it oodles of FSCO/I, e.g. the genome starts at about 100 – 1,000 bits of digitally stored info, we know the best explanation for such FSCO/I per reliable sign, IDOW, or design. Life’s origin is on design, the onward replication and reproduction from generation to generation carries forward what was built in. 6: Unless a designer is intervening to build tRNA when a cell divides, the knowledge of how to construct all of them is found in the genome. So, the question is, how was this knowledge created? Strawman, again. Cf just above. 7: where did I imply this? [inability to provide a counter instance to FSCO/I reliably being produced in our observation by IDOW] This is evident from your tactics, as was pointed out in 306, above but neatly omitted:
CR, 299: it is also parochial in that it implicitly includes the idea that knowledge / information must be justified by some ultimate source. How do you justify whatever arbiter defines this relationship? And how do you justify that, etc? Will you respond with a serious question this time? KF: In effect you grudgingly imply that you cannot provide an actual case of coded, functionally specific information of 500+ bits coming about by known forces of chance and necessity without intelligent direction. That is obvious, for if you had a case you would not be going into such convolutions but would triumphantly trot it out. But the canali on Mars failed, Weasel failed, GA’s failed, and the YouTube vid on how a clock could evolve from gears and pendulums failed too, etc. So you cannot bring forth an actual case to make your point. To brazen it out, you want to demand the right to suggest without evidence that chance and necessity can and do, on the gamut of accessible resources, create FSCO/I. Sorry, a demonstrated source – design – is an obviously superior explanation to something that has no such base. FYI, there is no question-begging circle on what “must” be the source of knowledge, codes, intelligent messages etc. WE HAVE OBSERVATIONS, abundant and unexceptioned observations, that show that FSCO/I comes from design. So, you are going up against an empirically abundantly justified induction. And your trick is to assert question-begging.
The strawman tactic is evident. 8: Pointing out that justification is impossible is not “going into such convolutions”. It’s a criticism of one’s form of epistemology and the impact it would have on their conclusions. Of course, warrant per observation, consistent pattern seen in such observations and reasonable inference to best explanation, is sufficient for all practical and responsible purposes. But to the hyperskeptic such as CR, such can be simply swept away by using dismissive words. When it is suitable. “Justification [--> more accurately, warrant] is impossible” of course cannot be consistently lived by. It refutes itself. Let me give a case of warrant to undeniably certain truth. Statement E: Error exists. This is obviously so per general observation and experience, but it is also an undeniably true claim. To try to deny E at once instantiates it, as either E or else its denial NOT-E must be false. So, even to deny E ends up supporting it. Similarly, it is a reliable induction that a dropped heavy object near earth falls more or less towards the centre thereof, with initial acceleration 9.8 N/kg. Likewise it can be warranted that reasonably pure water at sea level boils at about 100 degrees C, under standard atmosphere conditions. Similarly, there is a certain body of Pt alloy near Paris that is the standard of mass, the kilogram. One metre is the distance light travels in about 3 ns. (There is a more exact time, used in the current formal definition of the metre. It is also demonstrable that this definition is a successor to one in terms of a certain number of wavelengths of light, thence onwards the distance between two scratch-marks on a certain bar of Pt alloy, and thence onward per the original definition, a fraction of the distance from the Earth’s pole to the equator through Paris. That is an example of historical warrant that produces morally certain knowledge.) 
9: Where does your conception of human knowledge differ from the conception I outlined? Please be specific. Here's your chance to show that my assessment is wrong, by pointing out how your view differs, in detail. Strawman, that pretends that there was no answer to the assertions, at length previously. The above list of cases that can be fairly easily fleshed out should suffice to show, again, why CR's claims are utterly wrong-headed. 10: justification is impossible. Of course, being open to criticism, please feel free to point out how it's possible, in practice . . . . How is it an error to point out your argument is parochial, in that it completely ignores other forms of epistemology? What sort of acknowledgment would "correct" or "amend" this? Should I deny they are well formed or that they exist as alternatives? What actually seems to be going on is that you cannot recognize your specific conception of human knowledge as an idea that would be subject to criticism. Specifically, your responses so far seem to be that "everyone knows we use induction", as if you accept it uncritically and that it's a taboo to even question it. Yet, you haven't actually presented a "principle of induction" that works in practice. The strawman tactics continue, again and again. Onlookers who wish to see more of why this is completely a caricature may wish to follow the original post and exchanges in this thread [--> i.e. here in this thread above]. (CR is trying to rebut the force of Q 1 of 18. He manifestly fails but is unwilling to acknowledge adequate and repeated correction.) >> ____________ I trust this should suffice to show what is going on. It seems that objectors to design thought cannot seriously and cogently answer the 18 Q's. KF

kairosfocus
September 9, 2012 at 11:55 PM PDT
KF: [Q] 2: Is there such a thing as reasonable inductive generalisation that can identify reliable empirical signs of causal factors that may act on objects, systems, processes or phenomena etc., including (a) mechanical necessity leading to low contingency natural regularity, (b) chance contingency leading to stochastic distributions of outcomes and (c) choice contingency showing itself by certain commonly seen traces familiar from our routine experiences and observations of design? If not, why not? No, I've outlined objections to inductivism above. I'm assuming you're still busy and have yet to respond to them. KF: Being busy for the moment, let me take up your opening, as a slice of the cake that has in it the main ingredients (and yes, this is inductive): I'd suggest you apply that to my above comment that specifically contrasts Critical Rationalism and inductivism. For example…
[b --> Strawman summary. The pivot is not conjectures on the past but first observation of traces of the past or of things too remote to be directly observed.
Again we cannot observe causes. As such, how do you know where to look without first conjecturing a theory? From my above comment…
… inductivism doesn’t tell us what we should observe or why those observations are relevant because all we have are observations at the outset. Until we devise a test, we do not know what observations to make. And without at least one theory, we have no way to devise a test that might result in observations that conflict with that particular theory. If initial observations did tell us what test would actually conflict with a theory, there would be no need to devise a test in the first place.
CR: It’s a bad explanation as shallow and easily varied. KF: [o --> Mere harrumphing.] If microscopes return accurate results because they represent hard to vary adaptations of matter, why would we expect a shallow and easily varied explanation to be closer to the truth? Why couldn't we get closer? Can you easily vary the design of a microscope (the explanation as to how it works) and expect it to give accurate results? For example, can your designer be easily varied and still perform the purpose of designing objects just as well? If not, then why would you think the shallow and easily varied explanation of "an abstract designer with no defined limitations" is a good explanation that actually brings us closer to truth? Why is your designer an exception? KF: Re Locke, why did you go off on a red herring tangent to a strawman on tabula rasa when what I cited with approval from Locke was in front of you. This, from Intro to essay on human understanding: Because Locke's conception of human knowledge was justificationist in nature. In addition, Locke's views represent a pre-scientific perspective of human knowledge…. From a quote on another thread….
All logically conceivable transformations of matter can be classified in the following three ways: transformations that are prohibited by the laws of physics, spontaneous transformations (such as the formation of stars) or transformations which are possible when the requisite knowledge of how to perform them is present. Every conceivable transformation of matter is either impossible because of the laws of physics or achievable if the right knowledge is present. This dichotomy is entailed in the scientific world view. If there were some transformation of matter that was not possible regardless of how much knowledge was brought to bear, this would be a testable regularity in nature. That is, we would predict that whenever that transformation was attempted, it would fail to occur. This itself would be a law of physics, which would be a contradiction. Furthermore, if we really do reside in a finite bubble of explicability, which exists as an island in a sea of inexplicability, the inside of this bubble cannot be explicable either. This is because the inside is supposedly dependent on what occurs in this inexplicable realm. Any assumption that the world is inexplicable leads to bad explanations. That is, no theory about what exists beyond this bubble can be any better than "Zeus rules" there. And, given the dependency above (this realm supposedly affects us), this also means there can be no better explanation than "Zeus rules" inside this bubble as well. In other words, our everyday experience in this bubble would only appear explicable if we carefully refrain from asking specific questions. Note this bears a strong resemblance to a pre-scientific perspective with its distinction between an Earth designed for human beings and a heaven that is beyond human comprehension.
Again, it's unclear how an "abstract designer with no defined limitations" is a good explanation unless you assume we cannot get closer to truth.

critical rationalist
September 6, 2012 at 10:51 AM PDT
CR: Re Locke, why did you go off on a red herring tangent to a strawman on tabula rasa when what I cited with approval from Locke was in front of you. This, from Intro to essay on human understanding:
Men have reason to be well satisfied with what God hath thought fit for them, since he hath given them (as St. Peter says [NB: i.e. 2 Pet 1:2 - 4]) panta pros zoen kai eusebeian, whatsoever is necessary for the conveniences of life and information of virtue; and has put within the reach of their discovery, the comfortable provision for this life, and the way that leads to a better. How short soever their knowledge may come of an universal or perfect comprehension of whatsoever is, it yet secures their great concernments [Prov 1: 1 - 7], that they have light enough to lead them to the knowledge of their Maker, and the sight of their own duties [cf Rom 1 - 2 & 13, Ac 17, Jn 3:19 - 21, Eph 4:17 - 24, Isaiah 5:18 & 20 - 21, Jer. 2:13, Titus 2:11 - 14 etc, etc]. Men may find matter sufficient to busy their heads, and employ their hands with variety, delight, and satisfaction, if they will not boldly quarrel with their own constitution, and throw away the blessings their hands are filled with, because they are not big enough to grasp everything . . . It will be no excuse to an idle and untoward servant [Matt 24:42 - 51], who would not attend his business by candle light, to plead that he had not broad sunshine. The Candle that is set up in us [Prov 20:27] shines bright enough for all our purposes . . . If we will disbelieve everything, because we cannot certainly know all things, we shall do muchwhat as wisely as he who would not use his legs, but sit still and perish, because he had no wings to fly. [Essay on Hum U/stdg, Intro sect 5; Text references added to document the sources of Locke's allusions and citations.]
That sounds like some common good sense to me, and well worth heeding still. Gotta go. KF

kairosfocus
September 4, 2012 at 03:56 PM PDT
CR: I have a national sci edu crisis to deal with, so I have to be selective. I see:
in the above examples, the knowledge of how to tell time via the sun is in us and our sundials, not the sun. But this knowledge is embedded in the watch, as it is in biological organisms. So, the question is, how did the knowledge end up embedded in either of these things? How does ID explain it?
Designs are not about "adaptations" -- they are about matched components integrated to form a functional whole in accordance with a "wiring diagram." This well known pattern -- how you strain to avoid talking in terms of functionally specific, complex organisation and associated information -- is such that components are maximally unlikely to arrive at effective configs in the space of possibilities by blind chance, mechanical necessity or specific cases of that such as exaptation. DESIGN, as is abundantly exemplified in the world around us, easily explains that pattern: the specification of components, orientations, magnitudes, adjustments etc. to function in intended ways in given environments, followed by skilled assembly and adjustment, often with debugging to eliminate the almost inevitable bugs. This is a commonplace, and it is the only empirically observed cause of systems with FSCO/I, where we actually see the origin. (Self replication in the relevant sense is not an origin; it is an additional, highly complex function to be explained.) All of this is well known. KF

kairosfocus
September 4, 2012 at 03:39 PM PDT
The recent case of a YouTube video of how a clock spontaneously evolves from gears and levers serving as pendulums and pointers is a classic, as the person obviously does not understand that getting precision gears to mesh and to be backed at precisely controlled points is a serious design and construction task.
I believe I also devoted some attention to this purported example of "darwinian evolution" in posts here at UD.

Mung
September 4, 2012 at 01:26 PM PDT
CR: Being busy for the moment, let me take up your opening, as a slice of the cake that has in it the main ingredients (and yes, this is inductive):
What we do
[ a --> in dealing with scientific reconstructions of the unobserved past, which applies to forensics (as applied science), to geology, to astrophysics, and of course to evolutionary biology, etc. That is, the question is not faced only by design thinkers, so one must be balanced rather than selectively hyperskeptical . . . ]
is conjecture explanatory theories about unseen causes which must have logical consequences for the present state of the system.
[b --> Strawman summary. The pivot is not conjectures on the past but first observation of traces of the past or of things too remote to be directly observed. c --> The challenge here being that we wish to have a credible understanding of the course of the past and how it led to the present marked by those traces [e.g. why oxbow lakes and meanders in a river valley and why a wide flood plain with walls beyond], or else of the behaviour of the remote object and how it gives rise to say a spectrum with Fraunhofer lines red shifted by a certain amount. And many more like that. This is a real problem, and one that has real implications for the methods of science and underlying inductive reasoning, induction being that form of logical argumentation where evidence and reasoning provide support for conclusions that may be substantial, but not proof that is beyond all dispute rooted in axioms acceptable to all. d --> The logic applied being inference to best empirically supported explanation, in light of setting up and/or observing the course of relevant forces and factors in the present, that give rise to patterns of results that are comparable to the traces. e --> In cases where we have patterns of consequences that are shown to be characteristic of given causal forces, these are taken as credible signs of those forces at work. (I think you should at the very least work your way through the discussion here with us, on the deer track example, and also the signature on a cheque example. Notice, in both cases fraud is possible, but there is a heuristic that allows us to accept that once the signs are there, and once there are no further evident signs of fraud or ambiguity, then it is reasonable to infer to the obvious conclusions. Of course there are sometimes problems with the actual reliability of signs, as may be seen in the discussion here that implicitly contrasts the cosmological case and the geodating case. Ironically, there is a tendency to accept signs that are not particularly reliable on testing, because they support a given dominant school of thought.) f --> Thus, per inference to best explanation in such cases, we may identify most credible causes.]
These theories are then empirically criticized.
[g --> More correctly, inferences to best explanation in science are open to development and correction, on further evidence. This is consistent with how scientific knowledge is provisional.]
However, for a mere theory of an abstract designer that has no defined limitations, there can be no necessary consequences for the present day system.
[h --> Strawman caricature. Design theory is about detection of design on empirically warranted signs as material causal factor; it is not a theory about a designer or a cluster of designers. THAT TWEREDUN comes before WHO DUNIT. i --> Science, ostensibly, is meant to be accurate to the world and to provide warrant on observations that grounds confidence in that accuracy, so it matters a great deal whether a theory's claims are potentially truth bearing. j --> Thus, in a case where we have no credible observations that something like FSCO/I is, to our knowledge, produced by blind chance and mechanical necessity AND we also have a needle in the haystack/infinite monkeys analysis on grounds similar to those of the statistical form of the 2nd law of thermodynamics, a proposed explanation of FSCO/I that insists that it MUST have come about by forces of chance and necessity is in serious trouble with verisimilitude. k --> That proponents of such theories then set out to gerrymander definitions of science and its methods in the teeth of history, epistemology and logic alike points to where serious reform is needed in science. l --> That is why the evidence that the living cell is a gated, encapsulated, metabolic automaton involving a von Neumann kinematic self replication facility that uses digitally coded tapes and associated informational protocols in a context of a step by step algorithmic process, i.e. it is FSCO/I rich, is material to the identifying of the empirically credible cause of the living cell. And surely, that is a significant issue in science today. m --> Where we know the only and routinely empirically observed cause of algorithms, digital codes, implementing machines correctly arranged to work together, and the functionally specific complex information. Namely, design.
n --> So it is both reasonable and momentous for our understandings of the origin of life and later on of major body plans that these are FSCO/I rich in a context where on evidence such FSCO/I is a strong sign of design as causal process.]
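The needle-in-the-haystack point in [j] can be illustrated with a toy random-search calculation. The 28-character target phrase is borrowed from the Weasel discussion mentioned earlier in the thread; the whole setup is a hypothetical illustration of how the odds scale, not a biological claim:

```python
import random
import string

# Toy "infinite monkeys" illustration for point [j] above.
target = "methinks it is like a weasel"        # 28 characters
alphabet = string.ascii_lowercase + " "        # 27 symbols

# Probability that a single uniform random draw matches the target:
p_hit = (1 / len(alphabet)) ** len(target)
print(f"p(single random draw matches) = {p_hit:.3e}")   # ~1e-40

# Empirically, how close do blind draws actually get?
random.seed(0)
draws = ("".join(random.choice(alphabet) for _ in target) for _ in range(10_000))
best = max(sum(a == b for a, b in zip(d, target)) for d in draws)
print(f"best match over 10,000 blind tries: {best}/{len(target)} characters")
```

Even for this very short target the single-draw odds are around one in 10^40; a 500-bit target is unimaginably further out of reach of blind sampling.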
It’s a bad explanation as shallow and easily varied.
[o --> Mere harrumphing.]
In short, despite repeated well warranted correction, you keep on going off the rails at the outset. You need to think about what it is that keeps you locked in a circle of basic errors, even when you have ample opportunity to correct same. KF

kairosfocus
September 4, 2012 at 05:35 AM PDT
CR: Let's boil her down:
[Q] 2: Is there such a thing as reasonable inductive generalisation that can identify reliable empirical signs of causal factors that may act on objects, systems, processes or phenomena etc., including (a) mechanical necessity leading to low contingency natural regularity, (b) chance contingency leading to stochastic distributions of outcomes and (c) choice contingency showing itself by certain commonly seen traces familiar from our routine experiences and observations of design? If not, why not?
KF

kairosfocus
September 4, 2012 at 05:02 AM PDT
This leads us to the question of what constitutes "the appearance of design", which is a reflection of our acceptance of observations from microscopes (the existence of long chains of hard to vary explanations for those observations). It was William Paley who noted some objects not only can serve a purpose but there are objects which are *adapted* to a purpose. For example, if you slightly altered the design of a watch (or a microscope) it would serve the purpose of keeping time (or magnifying samples) less well, or not at all. On the other hand, we can use the sun to keep time, even though it would serve that purpose equally well if its features were slightly or even massively modified. Just as we adapt the earth's raw materials to serve a purpose, we also find uses for the sun it was never designed or adapted to provide. So, merely being useful for a purpose, without being hard to vary and retaining that ability, does not reflect the appearance of design. IOW, good designs are hard to vary. This is a reflection of our long chain of independent, hard to vary explanations for how microscopes work. Adaptations represent transformations of matter. In the case of a microscope, raw materials are adapted into glass and metal, which are adapted into lenses, gears and frames. These components are adapted into a particular configuration in a particular order. If you varied these adaptations slightly the microscope would not serve the purpose of magnifying samples as well, or not at all. These are two sides of the same coin, so to speak. Again, it's unclear what adding "and some designer wanted it that way" brings to the explanation or how it is even desirable. Also, in the above examples, the knowledge of how to tell time via the sun is in us and our sundials, not the sun. But this knowledge is embedded in the watch, as it is in biological organisms. So, the question is, how did the knowledge end up embedded in either of these things?
How does ID explain it?

critical rationalist
September 3, 2012 at 11:40 AM PDT
KF, Thanks for your reply. I agree this is a productive discussion, which has been helpful for me to understand your position as well. In that spirit, I'll attempt to further clarify the difference between these two forms of epistemology.

Critical Rationalism
- We notice a problem.
- We propose solutions to the problem.
- Since proposed solutions are essentially guesses about what is out there in reality, we…
- Criticize the theory for internal consistency. Solutions that are internally inconsistent are discarded.
- Criticize the theory by taking it seriously, in that we assume it's true in reality and that all (empirical) observations should conform to them, *for the purpose of rational criticism*. "All observations" reflects all of our current, best solutions to other problems, which are themselves conjectures that have survived criticism.
- This process continues until only one proposed solution is left, rather than positively supporting one particular theory.
- The process starts all over again when we notice another problem, such as new observations that conflict with our remaining proposed solution.

Observations are themselves based on theories. So, when a new observation conflicts with a deep, hard to vary explanation, one form of criticism is to criticize the theory behind the new observations by conjecturing a theory why those observations might be wrong, then criticizing that theory as well. An example of this is OPERA's observations of faster than light neutrinos, which conflicted with Einstein's special relativity (SR). These results didn't tell us anything, one way or the other, as we had yet to devise a good explanation for the observations, such as we have for microscopes. In the absence of a good explanation, we had no way to criticize these observations. (For example, in the case of microscopes, the samples could have been prepared incorrectly or mislabeled.
This is part of the hard to vary explanation as to why microscopes tell us something about reality.) So, observations are neutral (in the sense you're referring to) without good explanations. As such, they could not falsify SR. Eventually OPERA did come up with an explanation for the observations: an improperly attached fiber optic cable and a clock oscillator ticking too fast. SR lives on to be criticized another day. If one assumes microscopes return accurate results merely because "some abstract designer with no defined limitations wants them to", we have no way of criticizing the resulting observations, as the explanation for the results could be easily varied. For example, you might put the wrong sample under the lens or replace the lens with a penny, but an abstract designer with no limitations could still display the right sample because "that's what the designer wanted". Nor is it clear how appending "because some abstract designer with no defined limitations wanted them to play those roles" to our current, long chain of independently formed, hard to vary explanations as to why microscopes return accurate results adds to the explanation or is even desirable in regards to actually solving the problem. For example, would you start discarding observations from microscopes if this addition was absent, but the long chain of independently formed, hard to vary explanations remained? Would this stop us from making progress?

Inductivism
- We start out with observations.
- We then use those observations to devise a theory.
- We then test that theory with additional observations, to confirm it or make it more probable.

However, theories do not follow from evidence. At all. Scientific theories explain the seen using the unseen. And the unseen doesn't "resemble" the seen any more than falling apples and orbiting planets resemble the curvature of space-time. Are dinosaurs merely an interpretation of our best explanation of fossils?
Or are they *the* explanation for fossils? After all, there are an infinite number of rival interpretations that accept the same empirical observations, yet suggest that dinosaurs never existed millions of years ago. For example, there is the rival interpretation that fossils only come into existence when they are consciously observed. Therefore, fossils are no older than human beings. As such, they are not evidence of dinosaurs, but evidence of acts of those particular observations. Another interpretation would be that dinosaurs are such weird animals that conventional logic simply doesn't apply to them. One could suggest it's meaningless to ask if dinosaurs were real or just a useful fiction to explain fossils - which is an example of instrumentalism. Not to mention the rival interpretation that an abstract designer with no limitations chose to create the world we observe 30 days ago. Therefore, dinosaurs couldn't be the explanation for fossils because they didn't exist at the time. Yet, we do not say that dinosaurs are merely an interpretation of our best explanation of fossils; they *are* the explanation for fossils. And this explanation is primarily about dinosaurs, not fossils. So, it's in this sense that science isn't primarily about "things you can see". (I'd also note that the above "rival interpretations" represent general-purpose ways of denying anything, but I'll save that for another comment.) We seem to agree observations cannot be used to confirm theories. However, you do seem to think that observations can make a theory more probable. But this assumption is highly parochial, as it doesn't take into account the different kinds of unknowability. The first kind of unknowability covers scenarios where the outcome is completely random and all possible outcomes are known. An example of this is Russian Roulette. As long as you know all of the possible outcomes, we can use probability to make choices about it.
For example, if for some horrible reason, one had to choose between different versions of Russian Roulette with specific yet variable numbers of chambers, bullets and trigger pulls, one could use game theory to determine which variation would be most favorable. On the other hand, any piece of evidence is compatible with many theories (see above). This includes an infinite number of theories that have yet to be proposed. You cannot assign probabilities to un-conceived theories, because those probabilities would be based on the details of a yet to be conceived theory. In addition, scenarios that depend on the creation of knowledge represent a different kind of unknowability, despite being deterministic. For example, people in 1900 didn't consider nuclear power or the internet unlikely. They didn't conceive of them at all. As such, it's unclear how they could have factored their impact into some sort of probability calculation about the future. As such, in the face of this kind of unknowability, probability is invalid as a means of criticizing explanations, despite what our intuition might tell us. Furthermore, inductivism doesn't tell us what we should observe or why those observations are relevant, because all we have are observations at the outset. Until we devise a test, we do not know what observations to make. And without at least one theory, we have no way to devise a test that might result in observations that conflict with that particular theory. If initial observations did tell us what test would actually conflict with a theory, there would be no need to devise a test in the first place. For example, the evidence that corroborated Newton's laws of motion has been falling on the earth's surface for billions of years, which is far longer than the entirety of human inhabitance. Yet, we only got around to testing them about 300 years ago, after Newton conjectured his theory. As such, it's not evidence that is scarce, but good explanations for that evidence.
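The Russian Roulette comparison above can be sketched in a few lines. The variants compared are hypothetical, chosen only to show how one would actually compute which version is most favorable:

```python
from fractions import Fraction

def survival_probability(chambers, bullets, pulls, respin=True):
    """Chance of surviving `pulls` trigger pulls.  With respin=True the
    cylinder is spun before every pull (independent trials); without it,
    successive pulls step through distinct chambers.  All numbers here
    are illustrative, not taken from the comment."""
    if respin:
        return Fraction(chambers - bullets, chambers) ** pulls
    p = Fraction(1)
    for k in range(pulls):
        p *= Fraction(chambers - bullets - k, chambers - k)
    return p

# Comparing two hypothetical variants, as the comment says one could:
a = survival_probability(6, 1, 2)    # one bullet, two pulls, respin each time
b = survival_probability(6, 2, 1)    # two bullets, one pull
print(a, b)                          # 25/36 2/3
if a > b:
    print("the one-bullet, two-pull variant is more favorable")
```

Exact fractions make the comparison unambiguous: 25/36 beats 2/3, so all outcomes being enumerable is what makes probability applicable, which is the comment's point about this first kind of unknowability.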
And we can say the same about all other phenomena. So, we should look for explanations, not justification. Good explanations solve problems and allow us to make progress. When criticizing theories, we look for observations that can be better explained by one theory, rather than another. And we take into account all of our other current, best explanations for the purpose of criticism. Arguments that do not take them into account are parochial, that is, narrow in scope. Most relevant in our discussion here, the objection that "idea X is not justified" is a bad criticism because it applies to all ideas.

critical rationalist
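The conjecture-and-criticism procedure listed in this comment (propose solutions, discard the inconsistent, test the rest against accepted observations, repeat) can be sketched as a simple filter loop. The toy "theories" and observations below are placeholders of mine, not anything from the discussion:

```python
def survives_criticism(theories, observations):
    """Return the names of theories not refuted by any observation.
    Surviving is not positive confirmation -- survivors have merely
    not yet been refuted, matching the comment's point."""
    survivors = dict(theories)
    for obs in observations:
        survivors = {name: t for name, t in survivors.items() if t(obs)}
    return sorted(survivors)

# Toy rival conjectures about a stream of numbers:
theories = {
    "all even": lambda n: n % 2 == 0,
    "all positive": lambda n: n > 0,
    "all less than 10": lambda n: n < 10,
}
print(survives_criticism(theories, [2, 4, 8, 14]))
# ['all even', 'all positive'] -- "all less than 10" is refuted by 14
```

Note the loop returns a set of unrefuted survivors rather than a confirmed winner; new observations can shrink the set further, restarting the cycle the comment describes.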
September 3, 2012 at 11:37 AM PDT
What we do is conjecture explanatory theories about unseen causes which must have logical consequences for the present state of the system. These theories are then empirically criticized. However, for a mere theory of an abstract designer that has no defined limitations, there can be no necessary consequences for the present day system. It's a bad explanation as shallow and easily varied. From What Did Karl Popper Really Say About Evolution?
In an earlier work, Popper discussed the historical sciences in which the scientific method of theoretical sciences is used:
This view is perfectly compatible with the analysis of scientific method, and especially of causal explanation given in the preceding section. The situation is simply this: while the theoretical sciences are mainly interested in finding and testing universal laws, the historical sciences take all kinds of universal laws for granted and are mainly interested in finding and testing singular statements. [Popper, 1957, p. 143ff]
What Popper calls the historical sciences do not make predictions about long past unique events (postdictions), which obviously would not be testable. (Several recent authors—including Stephen Jay Gould in Discover, July 1982—make this mistake.) These sciences make hypotheses involving past events which must predict (that is, have logical consequences) for the present state of the system in question. Here the testing procedure takes for granted the general laws and theories and is testing the specific conditions (or initial conditions, as Popper usually calls them) that held for the system. A scientist, on the basis of much comparative anatomy and physiology, might hypothesize that, in the distant past, mammals evolved from reptiles. This would have testable consequences for the present state of the system (earth's surface with the geological strata in it and the animal and plant species living on it) in the form of reptile-mammal transition fossils that should exist, in addition to other necessary features of the DNA, developmental systems, and so forth, of the present-day reptiles and mammals.
However, this does not mean evolutionary theory is *positively* supported by these observations. Rather, it survives empirical criticism. The explanation behind Darwinism is that the knowledge of how to build biological adaptations was created via a form of conjecture and refutation. It's part of a universal explanation for how knowledge grows. Specifically, conjecture, in the form of genetic variation random to a specific problem to solve, and refutation, in the form of natural selection. This is a hard to vary explanation in that it would have necessary consequences for the current state of the system, which we should be able to empirically observe. One necessary consequence is that organisms should appear in the order of least to most complex. In addition, organisms should appear over time, rather than appearing all at once. If organisms appeared all at once or in the order of most complex to least complex, there is no way to vary Darwinism to explain it. Darwinists have nowhere to go. We can say the same regarding organisms born with new, complex adaptations for which there were no precursors in the parents, or a complex adaptation that has survival value today but was not favored by selection pressure in its ancestry (such as bears with the ability to detect and use internet weather forecasts as a means to determine when to hibernate). In all of these cases, some completely different explanatory theory would be needed. On the other hand, intelligent design theory refers to an abstract designer with no defined limitations. If we assume only this is true, for the purpose of criticism, what would be the necessary consequences for the current state of the system? An abstract designer with no defined limitations could have created organisms in any order, all at once or over time. It could have also created features for which there were no precursors, or which have survival value today but were not favored by selection in their ancestry.
What else would refute Darwinism's underlying explanation? Evidence that the knowledge of how to build organisms came into existence in a different way (which was also implied in the above). For example, if an organism was observed to undergo only, or mainly, favorable mutations, as predicted by Lamarckism or spontaneous generation, then a fundamentally new explanation for that knowledge would be required. KF: Locke’s reply is biting (and goes to another explaining concept that many in our day are so quick to deride, but should rethink their views), in his opening introductory remarks in his essay on human understanding, section 5. I cite this because it is apt and anticipated Hume by decades in a work he should have taken more seriously: Locke was an early empiricist and, as you pointed out, far from secular. From the Wikipedia entry on empiricism.
The notion of tabula rasa ("clean slate" or "blank tablet") connotes a view of mind as an originally blank or empty recorder (Locke used the words "white paper") on which experience leaves marks. This denies that humans have innate ideas. The image dates back to Aristotle;
What the mind (nous) thinks must be in it in the same sense as letters are on a tablet (grammateion) which bears no actual writing (grammenon); this is just what happens in the case of the mind. (Aristotle, On the Soul, 3.4.430a1).
This is naive inductivism.

critical rationalist
September 3, 2012, 11:36 AM PDT
F/N: Any iterative hill-climbing system that exploits peakiness of fitness functions operates inside islands of function. The design theory issue is to cross seas of non-function to find isolated islands of function by chance and necessity, which will be instantly understandable to someone who has had to find the just-right part for a car. A clip from IOSE:
Before we even take up details, we need to pause to underscore the idea that when a set of matching components must be arranged so they can work together to carry out a task or function, this strongly constrains both the choice of individual parts and how they must be arranged to fit together. A jigsaw puzzle is a good case in point. So is a car engine -- as anyone who has had to hunt down a specific, hard to find part will know. So are the statements in a computer program -- there was once a NASA rocket that veered off course on launch and had to be destroyed by triggering the self-destruct because of -- I think it was -- a misplaced comma. The letters and words in this paragraph are like that too. That's why (at a first, simple level) we can usually quite easily tell the difference between:

A: An orderly, periodic, meaninglessly repetitive sequence: FFFFFFFFFF . . .

B: Aperiodic, evidently random, equally meaningless text: y8ivgdfdihgdftrs . . .

C: Aperiodic, but recognisably meaningfully organised sequences of characters: such as this sequence of letters . . .

In short, to be meaningful or functional, a correct set of core components has to match and must be properly arranged, and while there may be some room to vary, it is not true that just any part popped in in any number of ways can fit in . . .
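The islands-of-function picture in the clip can be made concrete with a toy model (everything here -- the 50-bit space, the 20-bit "functional zone", the step counts -- is an illustrative assumption of mine, not anything from IOSE): blind sampling essentially never lands in a narrow functional zone, while hill climbing succeeds only where a gradient toward the island already exists.

```python
import random

# Toy "configuration space": bit strings of length 50. A string is "functional"
# only if it sits in a narrow zone (here: first 20 bits all set to 1).
N = 50

def functional(s):
    # Narrow, independently specifiable zone: 1 configuration in 2**20 of the
    # first-20-bits subspace
    return all(s[:20])

def hill_climb(s, steps=1000):
    # Reward partial matches; this gradient exists only because the score
    # function already "knows" about the island -- a pure needle-in-a-haystack
    # target gives no gradient at all
    score = lambda t: sum(t[:20])
    for _ in range(steps):
        i = random.randrange(N)
        t = s.copy()
        t[i] ^= 1                      # flip one random bit
        if score(t) >= score(s):       # keep neutral or improving moves
            s = t
    return s

random.seed(1)
# Blind sampling: chance of hitting the island per draw is 2**-20 (~1 in 10^6)
hits = sum(functional([random.randint(0, 1) for _ in range(N)])
           for _ in range(10_000))
print("blind hits in 10,000 draws:", hits)          # almost certainly 0
print("hill climb functional?", functional(hill_climb([0] * N)))
```

With a fixed seed the blind-search count is reproducible; the contrast is between a per-draw success probability of about 10^-6 and a hill climber that converges in a few hundred steps once the gradient is given.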
And of course, this has been pointed out over and over, only to meet willful refusal to seriously discuss. Guess what, this is not a game where the objector gets to shoot off endlessly repeated objections in the teeth of what is in the end a very simple issue of inference on reliable sign. Hence, the significance of Q1 in the OP above. We have an empirically reliable, tested sign of design as material causal factor. We have life forms that from the simplest reasonable cell brim over with it, starting with metabolism, protein manufacture, direct coded digital information storage and von Neumann kinematic self-replication. Absent a priori ideological commitments -- and those are on record in OFFICIAL statements from the US NAS and NSTA Board -- the no-brainer, overwhelmingly warranted conclusion is that cell-based life is designed. And, as was noted, we have reasonable confidence that a molecular nanotech lab some generations beyond Venter could do it. We also have a fine-tuned cosmos to account for, and that too points to design as best explanation. Those who draw that conclusion are not going to go away and are not going to be impressed by the sort of power-play dirty tactics being routinely used by objectors. And, sooner or later, the dirty objector tactics are going to backfire bigtime. KF

kairosfocus
September 2, 2012, 04:39 PM PDT
Joe: It seems Mathgrrl/Patrick -- the latter having confessed to using the former [which properly belongs to a Calculus professor out there . . . ] as a sockpuppet -- is forgetting that we at UD keep records.

First, here is the collection of links on the sorry record of the MG "you have not defined 'rigorously'" talking point. Notice how Graham -- an obvious confederate -- never showed up to respond to the answer. Also note how I had to remark on the problem of design theory objectors resorting to "insistently repeated misrepresentation maintained by drumbeat repetition and unresponsiveness in the teeth of adequate correction." This is more of the same from the same source, and it shows how, no matter how solidly a talking point has been answered, it will be recycled endlessly to those naive enough to take these advocates at face value.

(That's for anyone who wants to dig up the devastating details -- for MG/P; including the point where he tried to dismiss a logarithmic reduction as a probability calculation, which was the point when I knew for absolute sure I was dealing with no mathematician. For those who simply want to see how a reasonable metric can be set up, cf. here for a summary thread, with here as an onward drawing together of the threads. Remember, once we work under the gamut of the solar system and at fastest chemical reaction rates as clock-tick, the fraction of states that a blind process based on chance + necessity could sample stands as ONE straw to a cubical haystack 1,000 light years on the side, about as thick as our galaxy. Per sampling theory -- i.e., FYI MG/P, there is no necessity to calculate a detailed probability estimate -- and note the power set sampling frame challenge above -- the only reasonable expectation is that we will pick up from the overwhelming bulk: straw, never mind if the haystack were superposed on the galaxy. That's a needle-in-a-haystack search problem on steroids.)
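For what it's worth, the scale of that sampling claim reduces to back-of-envelope arithmetic. The figures below (10^57 atoms, 10^14 events per second, 10^17 seconds) are the stock numbers used in this thread's 500-bit threshold argument, taken here as assumptions:

```python
# Stock solar-system figures used in the 500-bit threshold argument (assumptions):
atoms = 10**57      # ~atoms in the solar system
rate = 10**14       # fastest chemical-reaction events per second, per atom
seconds = 10**17    # ~age of the cosmos in seconds

samples = atoms * rate * seconds   # upper bound on blind "search" events: 10^88
space = 2**500                     # configurations at the 500-bit threshold

fraction = samples / space
print(f"fraction of the 500-bit config space sampled: {fraction:.1e}")
# roughly 3e-63, i.e. the sample cannot credibly be expected to hit a
# deeply isolated target zone
```

The point of the sketch is only the order of magnitude: even a maximally generous event count (10^88) is vanishingly small next to 2^500 (about 3 x 10^150) possibilities.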
For the rest of us, I clip from the IOSE intro-summary page, on Dembski's initial quantification of CSI (in the onward context of why islands of function make sense when we deal with function dependent on multiple, well-matched, mutually interacting component parts) and going on to the Chi_500 metric that is useful in concrete situations:
WmAD, NFL p. 148: >>"The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity [[cf. here below], or what equivalently we have been calling in this Chapter Complex Specified Information or CSI . . . . Biological specification always refers to function . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . ." p. 144: [[Specified complexity can be defined:] ". . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ">> KF, IOSE I-S (onward links accessible at the IOSE site): >> xix: Later on (2005), Dembski provided a slightly more complex formula, that we can quote and simplify, showing that it boils down to a "bits from a zone of interest [[in a wider field of possibilities] beyond a reasonable threshold of complexity" metric: [CHI] = – log2[10^120 ·[PHI]S(T)·P(T|H)].
[CHI] is "chi" and [PHI] is "phi" xx: To simplify and build a more "practical" mathematical model, we note that information theory researchers Shannon and Hartley showed us how to measure information by changing probability into a log measure that allows pieces of information to add up naturally: Ip = - log p, in bits if the base is 2. (That is where the now familiar unit, the bit, comes from. [--> NB: this is a standard metric for information]) xxi: So, since 10^120 ~ 2^398, we may "boil down" the Dembski metric using some algebra -- i.e. substituting and simplifying the three terms in order -- as log(p*q*r) = log(p) + log(q ) + log(r) and log(1/p) = – log (p): Chi = – log2(2^398 * D2 * p), in bits, and where also D2 = [PHI]S(T) Chi = Ip – (398 + K2), where now: log2 (D2 ) = K2 That is, chi is a metric of bits from a zone of interest, beyond a threshold of "sufficient complexity to not plausibly be the result of chance," (398 + K2). So, (a) since (398 + K2) tends to at most 500 bits on the gamut of our solar system [[our practical universe, for chemical interactions! ( . . . if you want , 1,000 bits would be a limit for the observable cosmos)] and (b) as we can define and introduce a dummy variable for specificity, S, where (c) S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T: Chi = Ip*S – 500, in bits beyond a "complex enough" threshold * NB: If S = 0, this locks us at Chi = - 500; and, if Ip is less than 500 bits, Chi will be negative even if S is positive. * E.g.: a string of 501 coins tossed at random will have S = 0, but if the coins are arranged to spell out a message in English using the ASCII code [[notice independent specification of a narrow zone of possible configurations, T], Chi will -- unsurprisingly -- be positive. 
* Following the logic of the per aspect necessity vs chance vs design causal factor explanatory filter, the default value of S is 0, i.e. it is assumed that blind chance and/or mechanical necessity are adequate to explain a phenomenon of interest. * S goes to 1 when we have objective grounds -- to be explained case by case -- to assign that value. * That is, we need to justify why we think the observed cases E come from a narrow zone of interest, T, that is independently describable, not just a list of members E1, E2, E3 . . . ; in short, we must have a reasonable criterion that allows us to build or recognise cases Ei from T, without resorting to an arbitrary list. * A string at random is a list with one member, but if we pick it as a password, it is now a zone with one member. (Where also, a lottery, is a sort of inverse password game where we pay for the privilege; and where the complexity has to be carefully managed to make it winnable. ) * An obvious example of such a zone T, is code symbol strings of a given length that work in a programme or communicate meaningful statements in a language based on its grammar, vocabulary etc. This paragraph is a case in point, which can be contrasted with typical random strings ( . . . 68gsdesnmyw . . . ) or repetitive ones ( . . . ftftftft . . . ); where we can also see by this case how such a case can enfold random and repetitive sub-strings. * Arguably -- and of course this is hotly disputed -- DNA protein and regulatory codes are another. Design theorists argue that the only observed adequate cause for such is a process of intelligently directed configuration, i.e. of design, so we are justified in taking such a case as a reliable sign of such a cause having been at work. (Thus, the sign then counts as evidence pointing to a perhaps otherwise unknown designer having been at work.) 
* So also, to overthrow the design inference, a valid counter example would be needed, a case where blind mechanical necessity and/or blind chance produces such functionally specific, complex information. (Points xiv - xvi above outline why that will be hard indeed to come up with. There are literally billions of cases where FSCI is observed to come from design.) xxii: So, we have some reason to suggest that if something, E, is based on specific information describable in a way that does not just quote E and requires at least 500 specific bits to store the specific information, then the most reasonable explanation for the cause of E is that it was designed. The metric may be directly applied to biological cases:
Using Durston's Fits values -- functionally specific bits -- from his Table 1 to quantify I, so also accepting functionality on specific sequences as showing specificity, giving S = 1, we may apply the simplified Chi_500 metric of bits beyond the threshold:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond
SecY: 342 AA, 688 fits, Chi: 188 bits beyond
Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond
xxiii: And, this raises the controversial question that biological examples such as DNA -- which in a living cell is much more complex than 500 bits -- may be designed to carry out particular functions in the cell and the wider organism. >>
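As a quick arithmetic check on the Durston-based examples just quoted, the simplified metric Chi = Ip·S − 500 can be applied directly (a minimal sketch; the function name and layout are mine):

```python
def chi_500(fits, S=1, threshold=500):
    """Simplified Chi_500 metric from the clip: bits beyond the threshold.
    With S = 0 (no independent specification) the result locks at -threshold,
    exactly as the clip notes."""
    return fits * S - threshold

# Durston Fits values quoted above (functionally specific bits), with S = 1
for name, fits in [("RecA", 832), ("SecY", 688), ("Corona S2", 1285)]:
    print(f"{name}: {chi_500(fits)} bits beyond the threshold")
# RecA: 332, SecY: 188, Corona S2: 785 -- matching the figures in the clip
```

The arithmetic is deliberately trivial; the contested part of the argument is the justification for setting S = 1, not the subtraction.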
So, long since, MG/P has been decisively answered, for any reasonable person interested in a fair conclusion. However, that is precisely what we are not dealing with in this case, on conclusive track record. We here see selective hyperskepticism leading to closed-minded refusal to accept that there is another view that is legitimate and worth at least testing empirically. On this, you will see that there simply are no successful cases of FSCI emerging in a process that does not already start from built-in FSCI, i.e. we are looking invariably at hill climbing within islands of function or the like. The recent case of a YouTube video of how a clock spontaneously evolves from gears and levers serving as pendulums and pointers is a classic, as the person obviously does not understand that getting precision gears to mesh and to be backed at precisely controlled points is a serious design and construction task. The first linked deals in detail with Schneider's EV, courtesy Mung's deconstruction, this being MG/P's main attempted example. And after all this time, it seems that MG/P has yet to seriously read the linked Wiki article on modelling theory and quantification:
A mathematical model is a description of a system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modelling. Mathematical models are used not only in the natural sciences (such as physics, biology, earth science, meteorology) and engineering disciplines (e.g. computer science, artificial intelligence), but also in the social sciences (such as economics, psychology, sociology and political science); physicists, engineers, statisticians, operations research analysts and economists use mathematical models most extensively. A model may help to explain a system and to study the effects of different components, and to make predictions about behaviour. Mathematical models can take many forms, including but not limited to dynamical systems, statistical models, differential equations, or game theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models, as far as logic is taken as a part of mathematics. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with results of repeatable experiments. Lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed . . . . There are six basic groups of variables namely: decision variables, input variables, state variables, exogenous variables, random variables, and output variables. Since there can be many variables of each type, the variables are generally represented by vectors. Decision variables are sometimes known as independent variables. Exogenous variables are sometimes known as parameters or constants. The variables are not independent of each other as the state variables are dependent on the decision, input, random, and exogenous variables. 
Furthermore, the output variables are dependent on the state of the system (represented by the state variables). Objectives and constraints of the system and its users can be represented as functions of the output variables or state variables. The objective functions will depend on the perspective of the model's user. Depending on the context, an objective function is also known as an index of performance, as it is some measure of interest to the user. Although there is no limit to the number of objective functions and constraints a model can have, using or optimizing the model becomes more involved (computationally) as the number increases . . .
The bottom line is that as long as a model is reasonable and effective, it is sufficiently useful. It should be noted that there are billions of cases where the Chi_500 metric will rule that design is cause, and as these are observed cases, it is confirmed empirically reliable. There are no valid counter-examples where we separately know the origin and FSCI fails; attempted counters all end up being loaded with intelligent design, overtly or implicitly, with EV a capital case in point. We have a right to treat FSCI as a sign of design, per induction.

And since one of the objections was to the use of a threshold metric, let us observe that Einstein's Nobel Prize was largely based on the photoelectric effect, which is a threshold model. And dummy variables taking binary states on observation of a relevant external factor that affects the issue materially are a commonplace.

The serious onlooker will readily see why I have spoken of drumbeat repetition of long since adequately answered objections, demanding that that which has been adequately shown is not shown, and trotting out long since demonstrably specious objections as though they have merit; but in a poisonous atmosphere judgements will be clouded, which explains why the same objectors are ever so eager to unjustly smear design thinkers. Beyond a certain point, however, such becomes speaking with willful disregard for the truth, in hopes that what the objector knows or should know is false will be taken as true. MG/P, sadly, has long since passed that point. KF

kairosfocus
September 2, 2012, 04:21 PM PDT
You write a GA to allow for “built-in responses to environmental cues”.
Then your UPB is useless as a metric if you’re willing to accept that the environment can drive evolution to solve fitness problems as they arise.
1- That doesn't even follow, i.e. it is a non sequitur.

2- Yes, the UPB is useless in a design scenario, and your equivocation is duly noted.

Joe
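The point in dispute here -- whether a GA's "built-in responses to environmental cues" amount to a design scenario -- can be seen in miniature in a Weasel-style sketch (purely illustrative; the target string, mutation rate, and population size are my assumptions, not anything posted in the thread). The "environment" the program responds to is the fitness function, and the fitness function carries the target the programmer chose:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"   # the "problem to solve" is built in
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # The "environmental cue" is nothing but distance to a pre-specified target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in s)

random.seed(0)
parent = "".join(random.choice(CHARS) for _ in TARGET)
gen = 0
while fitness(parent) < len(TARGET):
    # Elitist selection: keep the best of 100 mutants and the parent
    parent = max([mutate(parent) for _ in range(100)] + [parent], key=fitness)
    gen += 1
print(f"reached target in {gen} generations")
```

Whatever one makes of the wider argument, the sketch makes the narrow technical point concrete: the hill the GA climbs exists only because the scoring function was written to define it.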
September 2, 2012, 02:20 PM PDT
And a note to keiths: "That which can be asserted without evidence can be dismissed without evidence." - Hitchens. IOW you have yet to post something to confront. You still don't have any evidence that nylonase was the result of blind and undirected processes. That is because you don't have any evidence that living organisms are the result of blind and undirected processes. Ya see, the OoL and its subsequent evolution are DIRECTLY linked. Designed at the OoL = designed to evolve. Not that that hasn't been explained to you guys thousands of times. So confront that- confront your strawmen and equivocations.

Joe
September 2, 2012, 01:57 PM PDT
And it continues: Toronto:
How do you know what “specific functionality” will be required in a yet unknown future?
You write a GA to allow for "built-in responses to environmental cues". Patrick:
Did he actually post a rigorous mathematical definition of kairosfocus’ metric and some example calculations or did he just, as per usual, claim to have done so without providing any evidence?
Yes, Patrick. I provided you with just that many moons ago. Others have also. As I said, it is more rigorous than anything your position has to offer. However, that doesn't take much, because your position still has nothing. And just because you can say "No, that is not a mathematically rigorous definition," that doesn't mean anything, because it is not supported by an example from your position. IOW no one cares what you have to say. Carry on with your evidence-free rhetoric...

Joe
September 2, 2012, 01:53 PM PDT
MathGrrl chimes in:
I remain appalled that kairosfocus can continue to use the terms FSCO/I and “Complex Specified Information” as if they had referents in reality when he has never been able to define them rigorously or show how to calculate them. Until he can do so, using those terms simply emphasizes his intellectual dishonesty.
And we are appalled that Patrick/MathGrrl can still insist that FSCO/I has not been rigorously defined when in fact it is more rigorously defined and measured than anything his position has to offer. I take it he is still upset that his position has nothing.

Joe
September 2, 2012, 12:51 PM PDT
PS: The OSA Optics Discovery Kit from Edmund Scientific. For exercises I use a plastic or wood metre stick stuck down using plasticine or the like, and spring-loaded plastic clothes pins to hold up the lenses in the kit. A card can be put up with the same pins. This allows measurements. (The "riders" that hold round lenses in clips can also be used, but the metre stick will need to be better supported.) And yes, I am advocating actually doing exercises; nothing teaches so well and nothing changes minds like actual experience. For the graphical exercises on lens theory, use good 1-cm grid graph paper on good smooth paper stock in non-repro blue, one of the most under-estimated of all scientific instruments. Get yourself a child's geometry set and a good 1-ft rule, and supplement with a nice student's bow compasses. Get a flexicurve for plotting curved graphs. For a good sci calc, xCalc is hard to beat as a free download; it is in effect a super HP-21 of old. If you don't like RPN, look for other free-for-download sci calcs.

kairosfocus
September 2, 2012, 04:10 AM PDT
CR: I wish to take up, from 81 above, your:
we should accept observations from microscopes because they represent good explanations for those observations. Good explanations are long chains of independent, hard to vary explanations. This is not the same as induction. I’m not a hyper-skepticist regarding microscopes.
I need to pause and say how important the exchange we are having is, as it is clarifying for the record where the key issues lie. As such, I must express appreciation for your stating and defending your views. And, BTW, you have had another effect, on me as scientist-educator. You have caused me to revise my view on the significance of ray optics and the linked prism, lens and mirror studies; in particular, their underestimated role in grounding physics and its key construct, the empirically grounded theory. Where, of course, this is a major part of Newton's early work. His Opticks [cf. the Project Gutenberg scan formats, here] may be read with profit to this day. I find this note from Advertisement 1 interesting for our concerns:
To avoid being engaged in Disputes about these Matters, I have hitherto delayed the printing, and should still have delayed it, had not the Importunity of Friends prevailed upon me. If any other Papers writ on this Subject are got out of my Hands they are imperfect, and were perhaps written before I had tried all the Experiments here set down, and fully satisfied my self about the Laws of Refractions and Composition of Colours. I have here publish'd what I think proper to come abroad [--> out in public], wishing that it may not be translated into another Language without my Consent.
I will comment in steps of thought:

1 --> Newton (N) here shows how experimental investigations underpinned his work, and how inductively arrived at, empirically reliable laws of nature played a pivotal role in his work.

2 --> This, he expands in his rather long Query 31 (the last of the list of queries at the end of the book), which presents what is in essence the simple, generic "scientific method" taught in schools and which, despite limitations and qualifiers, is highly useful as a heuristic:
. . . To tell us that every Species of Things is endow'd with an occult specifick Quality by which it acts and produces manifest Effects, is to tell us nothing: But to derive two or three general Principles of Motion from Phaenomena, and afterwards to tell us how the Properties and Actions of all corporeal Things follow from those manifest Principles, would be a very great step in[Pg 402] Philosophy, though the Causes of those Principles were not yet discover'd: And therefore I scruple not to propose the Principles of Motion above-mention'd, they being of very general Extent, and leave their Causes to be found out. Now by the help of these Principles, all material Things seem to have been composed of the hard and solid Particles above-mention'd, variously associated in the first Creation by the Counsel of an intelligent Agent. For it became him who created them to set them in order. And if he did so, it's unphilosophical to seek for any other Origin of the World, or to pretend that it might arise out of a Chaos by the mere Laws of Nature; though being once form'd, it may continue by those Laws for many Ages [--> notice the formulation, here of a pivotal aspect of the current debates over design] . . . . As in Mathematicks, so in Natural Philosophy, the Investigation of difficult Things by the Method of Analysis, ought ever to precede the Method of Composition. This Analysis consists in making Experiments and Observations, and in drawing general Conclusions from them by Induction, and admitting of no Objections against the Conclusions, but such as are taken from Experiments, or other certain Truths. For [speculative] Hypotheses are not to be regarded in experimental Philosophy. And although the arguing from Experiments and Observations by Induction be no Demonstration of general Conclusions; yet it is the best way of arguing which the Nature of Things admits of, and may be looked upon as so much the stronger, by how much the Induction is more general. 
And if no Exception occur from Phaenomena, the Conclusion may be pronounced generally. But if at any time afterwards any Exception shall occur from Experiments, it may then begin to be pronounced with such Exceptions as occur. By this way of Analysis we may proceed from Compounds to Ingredients, and from Motions to the Forces producing them; and in general, from Effects to their Causes, and from particular Causes to more general ones, till the Argument end in the most general. This is the Method of Analysis: And the[Pg 405] Synthesis consists in assuming the Causes discover'd, and establish'd as Principles, and by them explaining the Phaenomena proceeding from them, and proving the Explanations. In the two first Books of these Opticks, I proceeded by this Analysis to discover and prove the original Differences of the Rays of Light in respect of Refrangibility, Reflexibility, and Colour, and their alternate Fits of easy Reflexion and easy Transmission, and the Properties of Bodies, both opake and pellucid, on which their Reflexions and Colours depend. And these Discoveries being proved, may be assumed in the Method of Composition for explaining the Phaenomena arising from them: An Instance of which Method I gave in the End of the first Book.
3 --> We see here Newton's view of the world, which is not merely a design view but a Bible-based Creationist view, though of course he had certain doctrinal peculiarities. In particular he sees the world as an intelligently designed coherent system that is governed under laws that sum up the usual course of events. Where also, such may be inferred from observed patterns and then used in confident logical-mathematical deductions, subject to empirical correction and correction in light of errors of reasoning.

4 --> In passing I must note that in the clip from you above, you spoke of observations made through a microscope, which is of course a key acknowledgement that we do make observations which are factual.

5 --> Similarly, a microscope is a real world object and instrument constructed, in effect [in the simple case], on principles inferred from ray optics investigations, not an explanation. It is to this that we now turn, as ray optics is a good example of a limited -- it does not explain the underlying dynamics nor does it explain all phenomena -- but empirically credible and reliable scientific theory.

6 --> Start from a rectangular prism of glass or the like. Place it in a darkened room on a sheet of dark bristol board and pass a pencil of light through it from the side at various angles of incidence, so we can see the way the pencil of light, visible through scattering from the board, is deflected within and as it passes back out of the prism.

7 --> A more detailed investigation with the usual pins will show Snell's law in action. The use of a ripple tank will suffice to show that an excellent way to account for this behaviour is the varying wavelength of waves in diverse media, which naturally bends waves away from the normal when they speed up, and towards it as they slow down. Huygens' construction is helpful.
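The two quantitative laws this walkthrough turns on -- Snell's law just mentioned, and the thin-lens equation 1/f = 1/u + 1/v arrived at in step 14 below -- are easy to check numerically. A minimal sketch (the sample indices and distances are illustrative assumptions, not values from the exercises):

```python
import math

def refracted_angle(theta1_deg, n1, n2):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2). Returns theta2 in degrees;
    raises ValueError past the critical angle (total internal reflection)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        raise ValueError("total internal reflection")
    return math.degrees(math.asin(s))

def image_distance(f, u):
    """Thin-lens equation 1/f = 1/u + 1/v: solve for the image distance v
    (real-is-positive convention; v < 0 indicates a virtual image)."""
    return 1.0 / (1.0 / f - 1.0 / u)

# Air (n ~ 1.0) into glass (n ~ 1.5): the ray bends toward the normal
print(round(refracted_angle(30, 1.0, 1.5), 2), "degrees")   # 19.47 degrees

# f = 10 cm lens, object at u = 30 cm: a real, diminished, inverted image
v = image_distance(10, 30)
print(v, "cm image distance,", v / 30, "x magnification")   # ~15 cm, ~0.5x
```

Note that with u < f the solver returns a negative v, matching the virtual-image case discussed for the compound microscope further on.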
(Newton's corpuscularianism failed him here, though of course it turns out that waves and particles at quantum level are joint properties under diverse circumstances.)

8 --> Now, try the same with a triangular prism, and observe the refraction and dispersal of light, which bespeaks the dispersive medium involved. The reassembling of white light in a second prism was one of Newton's triumphs. He demonstrated that white light is a separable mixture.

9 --> Now, use a comb of fine, parallel pencils of light, and a stack of prisms, in the general shape of a convex then a concave lens. This will show the principal focus and how lenses refract light, forming real and virtual images.

10 --> With some investigation, it will be apparent that as a pattern, thin lenses do not appreciably deviate light passing through the optical centre. A reflection on how the rectangular prism in net deviates a pencil parallel to itself will be enough to see that the thinner it is, the less the displacement.

11 --> Likewise, paraxial rays come together at the principal focus.

13 --> Thirdly, rays are reversible. That is, for instance, a ray that passes through the PF on its side of the lens will be refracted parallel to the axis of the lens.

14 --> Further investigations on object distance from the optical centre, u, and image distance, v, with focal length f, will substantiate: 1/f = 1/u + 1/v

15 --> We have just arrived at some pretty effective laws of ray optics for thin lenses that suffice to make very useful graphical constructions that can be used to analyse and even design optical instruments. This will allow us to see how real and virtual images are formed, and to see why they are upright or inverted etc, as well as to measure magnification or diminution of the size of an image relative to the object.

16 --> A useful and impressive further exercise is to set up a lens in a darkish room, along a scale and with a card behind it, opposite a window.
(Edmund Scientific still has an excellent Optics Discovery Kit of the OSA.) Adjusting the location of the card will soon show a real, full-colour inverted moving image of the world beyond the window at the focal length of the lens.

17 --> A similar real, inverted image forms on the screen of a pinhole camera, or inside the Camera Obscura -- which seems to have played a significant but often overlooked role in the rise of realistic painting in the Renaissance era.

18 --> These illustrate how a real image is formed: light from a point at source is brought together at a corresponding point at image. (With a virtual image -- important for understanding the compound microscope -- light from a point on an object appears to diverge from a corresponding point on the image.)

19 --> A related experiment with a small plane mirror and pins will suffice to show that such a mirror forms a virtual half-universe behind it.

20 --> Similar exercises can be done with curved mirrors, which also form images and are non-dispersive. This BTW is how and why Newton invented the reflecting telescope, having despaired of getting rid of chromatic aberrations due to dispersion.

21 --> From this we can analyse the simple and compound microscopes and the astronomical and Galilean telescopes, as well as the basic reflector telescope and the camera. Prism binoculars as well, Porro and roof-prism. More sophisticated work will require wave optics, but that does not make the above findings false, once it is appreciated that limited and admittedly approximate results and patterns, within their limits, are correct -- accurate to reality as we may experience and observe it. (BTW, at a more sophisticated level, the same holds for classical thermodynamics and Newtonian dynamics in a quantum-relativity world.)

22 --> Where does this leave us?
First, we see that we may indeed use abductive inference on sets of experiences to formulate explanatory models that are potential truth bearers and which may attract empirical support. Such models gain credibility as empirically reliable on the strength of consistently accurate predictions. And, where limitations are found, acknowledging these retains the credibility of the limited theory as adjusted.
23 --> Moreover, Avi Sion's point on the need to keep a balanced view of inductive generalisations is still relevant:
We might . . . ask – can there be a world without any ‘uniformities’? A world of universal difference, with no two things the same in any respect whatever is unthinkable. Why? Because to so characterize the world would itself be an appeal to uniformity. A uniformly non-uniform world is a contradiction in terms. Therefore, we must admit some uniformity to exist in the world. The world need not be uniform throughout, for the principle of uniformity to apply. It suffices that some uniformity occurs. Given this degree of uniformity, however small, we logically can and must talk about generalization and particularization. There happens to be some ‘uniformities’; therefore, we have to take them into consideration in our construction of knowledge. The principle of uniformity is thus not a wacky notion, as Hume seems to imply . . . . The uniformity principle is not a generalization of generalization; it is not a statement guilty of circularity, as some critics contend. So what is it? Simply this: when we come upon some uniformity in our experience or thought, we may readily assume that uniformity to continue onward until and unless we find some evidence or reason that sets a limit to it. Why? Because in such case the assumption of uniformity already has a basis, whereas the contrary assumption of difference has not or not yet been found to have any. The generalization has some justification; whereas the particularization has none at all, it is an arbitrary assertion. It cannot be argued that we may equally assume the contrary assumption (i.e. the proposed particularization) on the basis that in past events of induction other contrary assumptions have turned out to be true (i.e. for which experiences or reasons have indeed been adduced) – for the simple reason that such a generalization from diverse past inductions is formally excluded by the fact that we know of many cases [[of inferred generalisations; try: "we can make mistakes in inductive generalisation . . . 
"] that have not been found worthy of particularization to date . . . . If we follow such sober inductive logic, devoid of irrational acts, we can be confident to have the best available conclusions in the present context of knowledge. We generalize when the facts allow it, and particularize when the facts necessitate it. We do not particularize out of context, or generalize against the evidence or when this would give rise to contradictions . . .[[Logical and Spiritual Reflections, BK I Hume's Problems with Induction, Ch 2 The principle of induction.]
24 --> So, it is reasonable to expect uniformities that are sufficiently intelligible and evident that we may make testable and potentially truth-bearing inductive generalisations, many of which are in no danger of being overthrown in their zones of broad confirmation.
25 --> And in the context of accurately reporting empirically reliable provisional patterns, such claims may reasonably claim to be just that: accurate to reality as experienced. Thus they can properly be termed knowledge in the weak form sense: well warranted, credibly true beliefs.
26 --> Some of these are so strongly supported that we may be morally certain of them. That is, it would be irresponsible or downright foolish or destructive to act as though we could deem them false.
27 --> So, we see where we can properly speak of scientific knowledge, and of observations -- direct and instrumental -- that are reliably accurate and in effect are facts, against which theories, models and hypotheses must be tested. (Where accuracy to reality is what is meant by truth. In Aristotle's terms: saying of what is that it is, and of what is not that it is not.)
28 --> And in this sense we may properly speak of empirical, observational support for the empirical reliability and accuracy of certain theories.
29 --> And it is in that context that we may identify patterns of cause and effect, with key paradigm cases illustrating mechanical necessity leading to natural regularities, chance circumstances leading to stochastically distributed contingency, and intelligent and purposeful choice leading to highly contingent outcomes that show various empirically reliable signs of design.
30 --> Where two of these tested and observed reliable signs are: (i) functionally specific complex organisation and associated information, and (ii) functionally specific, irreducible complexity.
Where also, on needle-in-the-haystack analyses of configuration spaces and of the constraints on blind chance and necessity in sampling such spaces, we are maximally unlikely to hit on islands of function by trial and error alone. (Intelligent designers use knowledge and insight to put us in the near vicinity of such islands, and development testing -- which may use constrained trial and error, hill-climbing improvements, simulations etc. -- then gets us on target.)
31 --> The problem with these signs is not that they are unreliable, but that they cut across an entrenched evolutionary materialist school of thought.
_________
So, we see the significance of Q1 in the syllabus of 18 Q's above in the OP. I trust this can now help us move on to Q's 2 - 4:
2: Is there such a thing as reasonable inductive generalisation that can identify reliable empirical signs of causal factors that may act on objects, systems, processes or phenomena etc., including (a) mechanical necessity leading to low contingency natural regularity, (b) chance contingency leading to stochastic distributions of outcomes and (c) choice contingency showing itself by certain commonly seen traces familiar from our routine experiences and observations of design? If not, why not?
3: Is it reasonable per sampling theory, that we should expect a chance based sample that stands to the population as one straw to a cubical hay bale 1,000 light years thick – rather roughly about as thick as our galaxy – more or less centred on Earth, to pick up anything but straw (the bulk of the population)? If you think so, why (in light of sampling theory – notice, NOT precise probability calculations)? [Cf. the underlying needle in a haystack discussion here on.]
4: Is it therefore reasonable to identify that functionally specific complex organisation and/or associated information (FSCO/I, the relevant part of Complex Specified Information as identified by Orgel and Wicken et al. and as later quantified by Dembski et al.) is – on a broad observational base – a reliable sign of design? Why or why not?
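The sampling-fraction point behind Q3 can be put in rough numbers. A minimal sketch (the figures are the ones commonly cited in these discussions for a 500-bit configuration space and a generous upper bound on blind search resources; they are assumptions for illustration, not derived here):

```python
from math import log10

# Size of a 500-bit configuration space
configs_500_bits = 2 ** 500

# Generous upper bound on blind samples: ~10^57 solar-system atoms,
# each changing state ~10^14 times/s, for ~10^17 s
max_samples = 10 ** (57 + 14 + 17)   # 10^88

fraction = max_samples / configs_500_bits
print(f"space    ~ 10^{log10(configs_500_bits):.1f} configurations")
print(f"sampled  ~ 10^88 at most")
print(f"fraction ~ 10^{log10(fraction):.1f}")
```

On these assumed figures the sampled fraction is far below one part in 10^60, which is the "one straw to a galactic-scale hay bale" comparison in Q3 expressed arithmetically.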
KF
PS: This survey on ray optics may be helpful.
kairosfocus
September 2, 2012, 03:32 AM PDT
CR: I know that my name is what it is, who my wife and children are, and what I ate for breakfast this morning. I know that a dropped heavy object near earth will fall, accelerating at 9.8 m/s^2 (the gravitational field strength being 9.8 N/kg). Similarly, that the Sun is a G2 main sequence star with blackbody surface temp about 5700 K, and absolute -- 10 parsecs -- magnitude 4.8 or so (just under -- brighter than -- 5). That its spectrum is characterised by Fraunhofer lines which indicate high metallicity, and much more. I know that on the dominant model of stellar origins, that suggests that it is at least a second generation star, and that its age is conventionally estimated at about 4 - 5 BY on the same models. Also, that our galaxy's centre lies in the constellation Sagittarius, and that we are out "on" a spiral arm. Also that when I was younger, we discussed our galaxy as a spiral one, now as a barred spiral. I know that the water molecule is H2O in chemical composition. I know that scientific theories are provisional explanatory constructs that may need to be revised, but in some cases may well turn out to be true. I know -- following Josiah Royce etc. -- that error exists, and that this is undeniable and thus self-evidently true. Similarly, and as I added to the IOSE intro page this morning, I know that the inductive generalisation that we sometimes err in such generalisations is itself in no danger of being overturned. And so forth. In these and many other cases, I have good warrant for the claims, in various forms, and these are credibly true, and I believe them. This is what I mean by claiming to know these things. Other things, I may suspect or even doubt. They may be possibly true, but I have no good warrant that gives me a basis for holding them credibly true. Please think again. KF
kairosfocus
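The quoted ~5700 K surface temperature can be cross-checked against the Sun's visible-light output via Wien's displacement law. A quick sketch (the constant is the standard CODATA value; the function name is ours):

```python
WIEN_B = 2.897771955e-3  # Wien displacement constant, metre-kelvins

def peak_wavelength(temp_kelvin):
    """Blackbody peak emission wavelength in metres: lambda_max = b / T."""
    return WIEN_B / temp_kelvin

lam = peak_wavelength(5700.0)
print(f"peak ~ {lam * 1e9:.0f} nm")  # ~508 nm, in the green part of the visible band
```

That the peak lands in the middle of the visible band is consistent with the G2 classification mentioned above.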
September 1, 2012, 11:21 AM PDT
CR: Knowledge is a high-level explanation for phenomena.
KF: Nope, knowledge is well warranted, credibly true belief.
From the same entry on Wikipedia...
Critical rationalism rejects the classical position that knowledge is justified true belief; it instead holds the exact opposite: That, in general, knowledge is unjustified untrue unbelief. It is unjustified because of the non-existence of good reasons. It is untrue, because it usually contains errors that sometimes remain unnoticed for hundreds of years. And it is not belief either, because scientific knowledge, or the knowledge needed to build a plane, is contained in no single person's mind. It is only available as the content of books.
But this doesn't mean we must be relativists, as indicated above.
critical rationalist
September 1, 2012, 09:41 AM PDT
I re-read my comment, forgetting I had already posted it, and revisited in an attempt to clarify, as there are many misconceptions about Popper. (I don't expect you to respond to the second post unless I point out exactly where they differ.) For example, if one thinks they are a Popperian but assumes Popper thought observations could make theories more probable, then they are confused about a key aspect of Popper's epistemology. The difference is subtle, but critical. It can be difficult to grasp because it doesn't match our intuition about how we operate. But Popper points out that when we actually criticize it, we do not use induction. For example, I'm *not* a hyper-skepticist as you're implying. Nor do I think that bridges built by non-Popperians need to be rebuilt. So, it's not that I object to induction, but that induction is impossible and that this same knowledge is created via some means other than justification. From the Critical Rationalism entry on Wikipedia
William Warren Bartley compared critical rationalism to the very general philosophical approach to knowledge which he called "justificationism". Most justificationists do not know that they are justificationists. Justificationism is what Popper called a "subjectivist" view of truth, in which the question of whether some statement is true, is confused with the question of whether it can be justified (established, proven, verified, warranted, made well-founded, made reliable, grounded, supported, legitimated, based on evidence) in some way. According to Bartley, some justificationists are positive about this mistake. They are naïve rationalists, and thinking that their knowledge can indeed be founded, in principle, it may be deemed certain to some degree, and rational. Other justificationists are negative about these mistakes. They are epistemological relativists, and think (rightly, according to the critical rationalist) that you cannot find knowledge, that there is no source of epistemological absolutism. But they conclude (wrongly, according to the critical rationalist) that there is therefore no rationality, and no objective distinction to be made between the true and the false. By dissolving justificationism itself, the critical rationalist regards knowledge and rationality, reason and science, as neither foundational nor infallible, but nevertheless does not think we must therefore all be relativists. Knowledge and truth still exist, just not in the way we thought.
(emphasis mine) Again, we should accept observations from microscopes because they represent good explanations for those observations. Good explanations are long chains of independent, hard to vary explanations. This is not the same as induction. I'm not a hyper-skepticist regarding microscopes.
critical rationalist
September 1, 2012, 09:36 AM PDT
CR: Just one quick point:
Knowledge is a high-level explanation for phenomena.
Nope, knowledge is well warranted, credibly true belief. KF
kairosfocus
September 1, 2012, 09:13 AM PDT
The above comment was posted to the wrong thread. It's a reply to what gpuccio wrote here...
critical rationalist
September 1, 2012, 08:24 AM PDT
gpuccio: Well, now I understand better your terminology. So, you call “non-explanatory knowledge” what I (and others) would call “unguided generation of useful information in a system by random variation”.
You're overlooking a key point: non-explanatory knowledge has significantly less reach than explanatory knowledge. If I lack an explanation as to why the coconut is opened when it accidentally fell on the rock, then its usefulness is limited to just that scenario. To open other coconuts from other trees without rocks beneath them, I'd carry them up a tree that did have one, then drop them. It's a useful rule of thumb. However, there are always explanations for non-explanatory knowledge, even when they are not explicit. In the case of the coconut, these explanations include mass, inertia, etc., and they have significantly more reach. Rather than dropping the coconuts from the tree to land on a rock, I can stay on the ground and strike any coconut with any rock. And I can substitute rocks and coconuts with other objects, such as anchors and shells, etc. This is significantly greater reach. This is a significant distinction which represents progress in our ability to, well, make progress as people. It also explains our recent, rapid increase in our ability to make progress.
gpuccio: I wonder: do you believe that “non-explanatory knowledge” can explain (in the scientific sense) a system where digitally coded, complex information is first stored, and then translated, by two completely different sets of code-aware, complex procedures, as happens in DNA protein genes? Just to know…
I'm still not sure exactly what you're asking or how it's relevant. Translation mechanisms perform transformations of matter. Transformations occur when the necessary knowledge is present. In addition, cells build themselves based on the knowledge found in their genome. These transformations occur when the requisite knowledge is present as well.
So, in both cases, the question is, "how was this knowledge created?" Knowledge is a high-level explanation for phenomena. For example, if someone is defeated by a chess program, we do not say they were defeated by electrons or sand. Yet, you seem to be asking me how electrons or sand can defeat a chess player. Are you suggesting that science should always explain everything reductively?
critical rationalist
September 1, 2012, 08:17 AM PDT
CR: We cannot observe causes. As such, all we can do is criticize theories with the intent of finding and correcting errors.
KF: First, it is not true that we cannot observe causal factors at work….
These are not equivalent statements, as I said we cannot observe *causes*. Furthermore, observations are based on hard to vary explanations for how we acquired them. So, we cannot positively support any particular theory or conclude it is more probable via observations. We're left with rational criticism.
KF: … or trace them from their characteristic outcomes. That is how scientific laws are established after all.
No, it's not. By using the word "trace" you appear to be suggesting we can mechanically extrapolate theories from observations. But this isn't possible, as we get more out of a theory than its observations. This simply doesn't add up.
KF: Similarly, it is not an unfair challenge to demand that a claimed causal factor held to be sufficient to cause an effect be demonstrated as actually doing so in our observation. That boils down to that in science — and common-sense day to day life — claims are subject to empirical observational tests.
Science is not primarily about "stuff you can see", as we use the unseen to explain the seen. Are you suggesting we can directly observe unseen things? How does that work, in detail? Or perhaps you're suggesting we have some other infallible source regarding unseen things? Again, induction and criticism are not the same thing. Observations cannot positively support a theory. As Popper pointed out, we solve the problem of induction by rational criticism. Furthermore, saying evolution is merely chance and necessity is like saying someone defeated by a chess program was defeated by electrons. While this is also true, you are appealing to a specific level of reductionism. Evolutionary processes create the knowledge of how to build adaptations, which is non-explanatory in nature.
And I mean genuinely create knowledge, rather than its having already existed in some form. Specifically, conjecture, in the form of genetic variation random to any specific problem to be solved, and refutation, in the form of natural selection. The result is non-explanatory knowledge. Does your account suggest this new knowledge existed at the outset? If so, it's creationism. Does your account suggest this knowledge "just appeared"? If so, it represents spontaneous generation, as found in aspects of Lamarckism. Is an account for this knowledge absent? If so, it's a bad explanation because it actually fails to solve the problem at hand. What is ID's account for how this knowledge was created? I'd agree that only people can create explanatory knowledge. I'd also agree that there are explanations for useful non-explanatory knowledge, even if it isn't explicitly presented. So, as people, we can be cognizant of explanations for non-explanatory knowledge whenever we discuss it. This, however, doesn't mean that knowledge of how to build organisms, which is found in the genome in a non-explanatory form, cannot be created in the absence of people.
KF: A classic is how Newton inferred to the Universal law of gravitation, cf here. Another is how Einstein inferred, on the threshold effect with the photoelectric effect, to the reality of photons and the threshold equation that is in large part responsible for his Nobel Prize.
That's the myth that Popper was referring to. Inference is defined as "a conclusion inferred from multiple observations". This implies observations can make a theory more probable. But they cannot. Again, you've got it backwards. To quote from an essay Einstein wrote in late 1919….
A theory can thus be recognized as erroneous [unrichtig] if there is a logical error in its deductions, or as incorrect [unzutreffend] if a fact is not in agreement with its consequences. But the truth of a theory can never be proven. For one never knows that even in the future no experience will be encountered which contradicts its consequences; and still other systems of thought are always conceivable which are capable of joining together the same given facts.
IOW, there are an infinite number of yet to be conceived explanations which are also compatible with the same observations. We cannot factor these un-conceived explanations into a calculus of probability, which makes it invalid as a means of deeming a theory more probable. It's simply not applicable in the sense you're implying. However, I'm a critical rationalist. As such, I'm open to you formulating a "principle of induction" that actually works *in practice*. However, no one has as of yet. In doing so, I recognize that my conception of human knowledge is an idea that is subject to criticism. Do you?
KF: Now, obviously, scientific knowledge is provisional in key respects. That's fine, warrant — notice the distinction in terminology — comes in degrees, as has been known for millennia.
Obviously? What about the empiricists, logical positivists and the like? Was it "obvious" to them? If you think it's obvious that knowledge must be justified by some ultimate source or arbiter, then it would come as no surprise that you think Darwinism cannot create the non-explanatory knowledge of how to build adaptations. So, your argument is parochial in nature, as it appears that you cannot recognize your conception of human knowledge as an idea that is itself subject to criticism.
KF: Where there is sufficient warrant that something is a best explanation and is empirically reliable, it is reasonable to use it in responsible contexts. In some cases, one would be irresponsible to ignore its force.
Which is where I started out. Epistemology is an explanation about how knowledge is created. Whether "design" is the best explanation rests on implicit assumptions about knowledge, such as whether it is complex, whether it is genuinely created, etc. The best explanation doesn't refer to a theory proven more likely by observations (which isn't possible); it means an explanation that has withstood the most rational criticism.
A theory that doesn't stick its neck out, such as one based on an abstract designer that has no defined limitations, is a bad explanation because it cannot be significantly criticized. Why don't you start out by explaining how knowledge is created, then point out how evolutionary processes do not fit that explanation. Please be specific.
________
CR: Much of the above is a simple repetition of what was already answered, as though nothing of significance was said in response above. It does not appear that there is a dialogue at this point. I suggest you need to think about the nature of induction, and about its role in science. It may help for you to do a review of the rise of astrophysics, geology and even evolutionary biology and particle physics. ALL of these deal with issues of the unobservable, but warranted. And I have to point out that it is silly to say that we are not observing mechanical necessity as a causal factor when we reliably see a heavy object dropped and falling with acceleration 9.8 m/s^2 (in a field of 9.8 N/kg), and then, if it is a fair die, tumbling and settling to a value across the set {1, 2, . . . 6} in accordance with a random distribution. Also, if the die is loaded, to see it settling reliably to, say, a 6 tells us something was done to it deliberately. I think it is fair to say that those count as observations of causal factors in action. And it is fair to say that seeing a die fall, tumble and come to a value is a fact of observation that is independent of, and can be used to check, any particular relevant theory. BTW, have you ever done the stack of prisms experiment to see how light is bent, and thence how a lens works? Mirrors? To then dismiss observations made using lenses and mirrors -- as opposed to say a computer simulation of what could be seen in such an instrument -- as though they are so embedded with debatable theoretical commitments that they do not count as observations of reality, is itself what is dubious. And so forth. KF
critical rationalist
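The fair-die versus loaded-die contrast above can be simulated directly: a fair die scatters stochastically across {1..6}, while a loaded one settles reliably toward a face. A minimal sketch (the weightings and helper name are ours, chosen for illustration):

```python
import random
from collections import Counter

def roll_many(weights, n=60_000, seed=1):
    """Roll a six-sided die n times with the given per-face weights; return face counts."""
    rng = random.Random(seed)
    faces = [1, 2, 3, 4, 5, 6]
    return Counter(rng.choices(faces, weights=weights, k=n))

fair = roll_many([1, 1, 1, 1, 1, 1])     # stochastic scatter, roughly uniform
loaded = roll_many([1, 1, 1, 1, 1, 20])  # face 6 dominates: a telltale of tampering

print(sorted(fair.items()))
print(sorted(loaded.items()))
```

The fair die's counts cluster near 10,000 per face, while the loaded die piles up on 6; the departure from the expected stochastic pattern is what flags the deliberate intervention.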
September 1, 2012, 08:16 AM PDT
PS: CR, did you notice the links I gave in the OP above on my own and others' answers to the syllabus of eighteen key questions? Why not start here. When you ignore this and suggest that I have not done substantially what you demand, that does not commend itself to me. KF
kairosfocus
September 1, 2012, 04:08 AM PDT
F/N: it is worth scooping this out from Locke:
It will be no excuse to an idle and untoward servant [Matt 24:42 - 51], who would not attend his business by candle light, to plead that he had not broad sunshine. The Candle that is set up in us [Prov 20:27] shines bright enough for all our purposes . . . If we will disbelieve everything, because we cannot certainly know all things, we shall do muchwhat as wisely as he who would not use his legs, but sit still and perish, because he had no wings to fly. [Essay on Human Understanding, Intro, Sect 5, parenthesis and emphasis added.]
Notice the implication: that much of what we accept and live by as knowledge is reliable enough to live by, but not certain beyond all doubt or correction. In short, we must live -- including in the basic biological sense -- by faith. So, our real goal should be, not certainty beyond all doubt [post-Gödel, not even Mathematics can meet that demand], but reasonable, well warranted belief that we may live by, here and hereafter. KF
kairosfocus
September 1, 2012, 02:22 AM PDT
