Uncommon Descent Serving The Intelligent Design Community

For record: Questions on the logical and scientific status of design theory for objectors (and supporters)


Over the past several days, I have been highlighting poster children of the illogic and want of civility too often found among critics of design theory – even among those claiming to stand on civility and to be posing unanswerable questions, challenges or counter-claims to design theory.

I have also noticed the strong (but patently ill-founded) feeling/assumption among objectors to design theory that they have adequately disposed of the issues it raises and are posing unanswerable challenges in exchanges.

A capital example of this was the suggestion by ID objector Toronto that the inference to best current explanation used by design thinkers is an example of question-begging circular argument. Here, again, is his attempted rebuttal:

Kairosfocus [Cf. original Post, here]: “You are refusing to address the foundational issue of how we can reasonably infer about the past we cannot observe, by working back from what causes the sort of signs that we can observe. “

[Toronto:] Here’s KF with his own version of “A concludes B” THEREFORE “B concludes A”.

(Yes, as the links show, this is a real example of the type of “unanswerable” objections being touted by opponents of design theory. Several more like this are to be found here and here, in the recent poster-child series.)

But, it should be obvious that the abductive argument pioneered in science by Peirce addresses the question of how empirical evidence can support a hypothesis or explanatory model (EM) as a “best explanation” on an essentially inductive basis, where the model is shown to imply the already known observations, O1 . . . On, and may often be able to predict further observations P1 . . . Pm:

EM => {O1, O2, . . . , On} and {P1, P2, . . . , Pm}

Now, the first problem here is that there is a counterflow between the direction of logical implication, from EM to O’s and P’s, and that of empirical support, from O’s and P’s to EM. It would indeed be question-begging to infer from the fact that EM – if true – would indeed entail the O’s and P’s, plus the observation of these O’s and P’s, that EM is true.

But, guess what: this is a general challenge faced by all explanatory models or theories in science.

For, in general, to infer that “explained” O’s and P’s entail the truth of EM would be to commit a fallacy, affirming the consequent: essentially, confusing the fact that EM being so is sufficient for the O’s and P’s to be so with the claim that the O’s and P’s therefore entail that EM is so.

That is, implication is not equivalence.
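The point can be checked mechanically. The short Python sketch below (purely illustrative; the names `em` and `o` stand in for “explanatory model is true” and “observation holds”) enumerates the truth table for material implication and shows that observing O leaves EM undetermined – which is exactly why affirming the consequent is a fallacy:

```python
# Material implication: "EM -> O" is false only when EM is true and O is false.
def implies(em: bool, o: bool) -> bool:
    return (not em) or o

# Every (EM, O) pair consistent with "EM implies O":
consistent = [(em, o) for em in (True, False) for o in (True, False)
              if implies(em, o)]
print(consistent)  # three of the four pairs survive

# Among those, restrict to the cases where O is actually observed true:
o_true = [(em, o) for (em, o) in consistent if o]
print(o_true)  # EM can be True OR False: observing O does not settle EM
```

Running this shows that both `EM = True` and `EM = False` remain live options once O is observed, so the support O lends to EM is inductive, not deductive.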

(One rather suspects that Toronto was previously unaware of this broad challenge to scientific reasoning. That would be overwhelmingly likely, as the logical strengths and limitations of the methods and knowledge claims of science are seldom adequately taught in schools and colleges – and calls for such teaching, as have happened in Louisiana etc., are too often met by advocates of evolutionary materialism with talking points that this is “obviously” an attempt to inject the Creationism bogeyman into the hallowed halls of “science” education. So, quite likely, Toronto saw the problem for the first time in connexion with attempts to find objections to design theory, and assumed that it is a peculiar challenge to that suspect notion. But, plainly, it is not.)

The answer to this challenge, from Newton forward, has been to acknowledge that scientific theories are to be empirically tested and shown to be reliable so far, but are subject to correction in light of new empirical evidence and/or gaps in logic. Provisional knowledge, in short. Yet another case where the life of reason must acknowledge that trust – the less politically correct but apt word is: faith – is an inextricable, deeply intertwined component of our systems of knowledge and our underlying worldviews.

But, a second challenge emerges.

For, explanatory models are often not unique. We may well have EM1, EM2, . . . EMk, which may actually be empirically equivalent, or may all face anomalies that none are able to explain so far. So, how does one pick a best model, EMi, without begging big questions?

It is simple to state but far harder to practice: once one seriously compares uncensored major alternative explanatory models on strengths, limitations and difficulties regarding factual adequacy, coherence and explanatory power, and draws conclusions on a provisional basis, the best of the candidates is reasonably warranted. That is, if there is a best candidate. (Sometimes, there is not. In that case, we live with alternatives, and in a surprising number of cases, it has turned out on further probing that the models are mathematically equivalent or are linked to a common underlying framework, or are connected to underlying worldview perspectives in ways that do not offer an easy choice.)
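As a toy illustration of that comparative-difficulties procedure (the model names and scores below are entirely hypothetical, invented only to show the shape of the tally), one can sketch it as a simple scoring across the three criteria, including the tie case where no unique best candidate emerges:

```python
# Hypothetical comparative-difficulties tally: each candidate explanatory
# model gets illustrative ratings on the three criteria named in the post.
candidates = {
    "EM1": {"factual_adequacy": 3, "coherence": 2, "explanatory_power": 3},
    "EM2": {"factual_adequacy": 2, "coherence": 3, "explanatory_power": 2},
    "EM3": {"factual_adequacy": 3, "coherence": 2, "explanatory_power": 3},
}

def total(scores):
    return sum(scores.values())

best_score = max(total(s) for s in candidates.values())
best = [name for name, s in candidates.items() if total(s) == best_score]

if len(best) == 1:
    print("best current explanation (provisionally):", best[0])
else:
    # Empirically equivalent so far: we live with the alternatives.
    print("no unique best candidate:", best)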

Such an approach is well within the region of inductive reasoning, where empirical evidence provides material support for confidence in – but not undeniable proof of – conclusions. These limitations of inductive argument are the well-known, common lot we face as finite, fallible, morally struggling, too often gullible, and sometimes angry and ill-willed human beings.

When it comes to explanatory models of the deep past of origins, we face a further challenge.

For, we cannot inspect the actual deep past; it is unobservable. (There is a surprisingly large number of unobserved entities in science, e.g. electrons, strings, the remote past and so forth. These, in the end, are held on an inference to best explanation basis in light of connexions to things we can and do observe. That is, they offer elegantly simple unifying explanatory integration and coherence to our theories. But, we must never become so enamoured of these constructs that we confuse them for established fact beyond doubt or dispute. Indeed, we can be mistaken about even directly observable facts. [Looks like that just happened to me with the identity of the poster of one comment a few days ago; apologies again for the misidentification.])

So, applying Newton’s universality principle, what we do is to observe the evident traces of the remote past. We then set up and explore circumstances in the present, where we can see if there are known causal factors that reliably lead to characteristic effects that are directly comparable to the traces of the past. When that is so, we have a basis for inferring that we can treat the traces from the past as signs that the same causal factor is the best explanation.

Put in such terms, this is obviously reasonable.

Two problems crop up. First, too often, in origins science, there is a resort to a favoured explanation despite the evident unreliability of the signs involved, because of the dominance of a school of thought that suppresses serious alternatives. Second, there can be signs that are empirically reliable that cut across the claims of a dominant school of thought. Design theorists argue that both of these have happened with the currently dominant evolutionary materialist school of thought. Philip Johnson’s reply to Richard Lewontin on a priori materialism in science is a classic case in point – one that is often dismissed but (kindly note, Seversky et al) has never been cogently answered:

For scientific materialists the materialism comes first; the science comes thereafter. [Emphasis original] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”

. . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [Emphasis added. “The Unraveling of Scientific Materialism,” First Things, 77 (Nov. 1997), pp. 22–25.]

This example (and the many others of like ilk) should suffice to show that the objectors to design theory do not have a monopoly on scientific or logical knowledge and rationality, and that they can and do often severely and mistakenly caricature the thought of design thinkers to the point of making outright blunders. Worse, they then too often use that as an excuse to resort to ad hominem attacks that cloud issues and unjustly smear people, polarising and poisoning the atmosphere for discussion. For instance, it escapes me how some could ever have imagined – or imagined that others would take such a claim as truthful – that it is a “lighthearted” dig to suggest that I would post links to pornography.

Such a suggestion is an insult, one added to the injury of red herrings led away to strawmannish caricatures and dismissals.

In short, there is a significant problem among objectors to design theory that they resort to a habitual pattern of red herring distractors, led away to strawman caricatures soaked in poisonous ad hominem attacks, and then set alight through snide or incendiary rhetoric. Others who do not go that far, enable, tolerate or harbour such mischief. And, at minimum, even if there is not a resort to outright ad hominems, there is a persistent insistence on running after red herrings on tangents to strawman caricatures, and a refusal to accept cogent corrections of such misrepresentations.

That may be angrily brushed aside.

So, I point to the further set of problems with basic logic, strawman caricatures and personal attacks outlined in the follow-up post here. Let us pick up a particular example of the evasiveness and denial of well-established points, also from Toronto:

Physics both restricts and insists on different combinations of “information”.

Why is the word information in scare quotes?

Because, believe it or not, as has been repeatedly seen at UD, many objectors to design theory want to contend that there is no algorithmic, digitally (i.e. discrete-state) coded, specifically functional (and complex) information in D/RNA. In reply, I again clip from Wikipedia, speaking against known ideological interest:

The genetic code is the set of rules by which information encoded in genetic material (DNA or mRNA sequences) is translated into proteins (amino acid sequences) by living cells.

The code defines how sequences of three nucleotides, called codons, specify which amino acid will be added next during protein synthesis.
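That codon-to-amino-acid mapping can be made concrete with a minimal sketch. Only a handful of the 64 codons of the standard genetic code are included here, and the `translate` helper is an illustrative toy, not a bioinformatics tool:

```python
# Minimal subset of the standard genetic code (mRNA codons -> amino acids).
CODON_TABLE = {
    "AUG": "Met",  # methionine; also the usual start codon
    "UUU": "Phe",  # phenylalanine
    "AAA": "Lys",  # lysine
    "GGC": "Gly",  # glycine
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",  # stop codons
}

def translate(mrna):
    """Read the message three letters at a time until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "STOP":
            break
        peptide.append(aa)
    return peptide

print(translate("AUGUUUAAAUAA"))  # ['Met', 'Phe', 'Lys']
```

The point of the sketch is simply that the reading is algorithmic: a discrete, rule-governed mapping from three-letter symbols to amino acids, executed step by step – which is what the cell’s ribosomal machinery does physically.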

Hopefully, these examples will suffice to begin to clear the air for a serious focus on substantial issues.

So, I think the atmosphere has now – for the moment – been sufficiently cleared of the confusing and polarising smoke of burning, ad hominem soaked strawman caricatures of design theory to pose the following questions. They are taken from a response to recent comments (with slight adjustments), and were – unsurprisingly, on track record – ignored by the objector to whom they were directed.

I now promote them to the level of a full, duly headlined UD post:

1: Is argument by inference to best current explanation a form of the fallacy of question-begging (as was recently asserted by design objector “Toronto”)? If you think so, why?

2: Is there such a thing as reasonable inductive generalisation that can identify reliable empirical signs of causal factors that may act on objects, systems, processes or phenomena etc., including (a) mechanical necessity leading to low contingency natural regularity, (b) chance contingency leading to stochastic distributions of outcomes and (c) choice contingency showing itself by certain commonly seen traces familiar from our routine experiences and observations of design? If not, why not?

3: Is it reasonable, per sampling theory, to expect that a chance-based sample that stands to the population as one straw to a cubical hay bale 1,000 light years thick – rather roughly about as thick as our galaxy – more or less centred on Earth, would pick up anything but straw (the bulk of the population)? If you think so, why (in light of sampling theory – notice, NOT precise probability calculations)? [Cf. the underlying needle in a haystack discussion here on.]

4: Is it therefore reasonable to identify that functionally specific complex organisation and/or associated information (FSCO/I, the relevant part of Complex Specified Information as identified by Orgel and Wicken et al. and as later quantified by Dembski et al) is – on a broad observational base – a reliable sign of design? Why or why not?

5: Is it reasonable to compare this general analysis to the grounding of the statistical form of the second law of thermodynamics, i.e. that under relevant conditions, spontaneous large fluctuations from the typical range of the bulk of [microstate] possibilities will be vanishingly rare for reasonably sized systems? If you think not, why not?

6: Is digital symbolic code found to be stored in the string-structure configuration of chained monomers in D/RNA molecules, and does such code function in algorithmic ways in protein manufacture in the living cell? If you think not, why not, in light of the generally known scientific findings on transcription, translation and protein synthesis?

7: Is it reasonable to describe such stored sequences of codons as “information” in the relevant sense? Why or why not?

8: Is the metric Chi_500 = Ip*S – 500 (in bits beyond the solar-system threshold), and/or the comparable per-aspect design inference filter as may be seen in flowcharts, a reasonable quantification or procedural application of the set of claims made by design thinkers? Or any other related or similar metric, as has been posed by Durston et al, or Dembski, etc.? Why, or why not – especially in light of modelling theory?

9: Is it reasonable to infer on this case that the origin of cell based life required the production of digitally coded FSCI — dFSCI — in string data structures, together with associated molecular processing machinery [cf. the vid here], joined to gated encapsulation, metabolism and a von Neumann kinematic self replicator [vNSR]? Why or why not?

10: Is it reasonable to infer that such a vNSR is an irreducibly complex entity and that it is required before there can be reproduction of the relevant encapsulated, gated, metabolising cell based life to allow for natural selection across competing sub populations in ecological niches? Why or why not? (And, if you think not, what is your empirical, observational basis for thinking that available physical/chemical forces and processes in a warm little pond or the modern equivalent, can get us, step by step, by empirically warranted stages, to the living cell?)

11: Is it therefore a reasonable view to infer – on FSCO/I, dFSCI and irreducible complexity as well as the known cause of algorithms, codes, symbol systems and execution machinery properly organised to effect such – that the original cell based life is on inference to best current explanation [IBCE], credibly designed? Why, or why not?

12: Further, as the increments of dFSCI to create dozens of major body plans are credibly 10 – 100+ million bits each, dozens of times over across the past 600 million years or so, and much of it on the conventional timeline falls in a 5 – 10 million year window on earth in the Cambrian period, is it reasonable to infer further on IBCE that major body plans show credible evidence of design? If not, why not, on what empirically, observationally warranted step by step grounds?

13: Is it fair or not fair to suggest that on what we have already done with digital technology and what we have done with molecular nanotech applied to the cell, it is credible that a molecular nanotech lab several generations beyond Venter etc would be a reasonable sufficient cause for what we see? If not, why not? [In short, the issue is: is inference to intelligent design specifically an inference to “supernatural” design? In this context, what does “supernatural” mean? “Natural”? Why do you offer these definitions and why should we accept them?]

14: Is it, or is it not, reasonable to note that – in contrast to the tendency to accuse design thinkers of being creationists in cheap tuxedos who want to inject “the supernatural” into science and so to produce a chaotic unpredictability:

a: From Plato in The Laws Bk X on, the issue has been explanation by nature (= chance + necessity) vs ART or techne, i.e. purposeful and skilled intelligence acting by design,

b: Historically, modern science was largely founded by people thinking in a theistic frame of thought and/or closely allied views, and who conceived of themselves as thinking God’s creative and sustaining thoughts — his laws of governing nature — after him,

c: Theologians point out that the orderliness of God and our moral accountability imply an orderly and predictable world as the overwhelming pattern of events,

d: Where also, the openness to Divine action beyond the usual course of nature for good purposes, implies that miracles are signs and as such need to stand out against the backdrop of such an orderly cosmos? [If you think not, why not?]

15: In light of all these and more, is the concept that we may legitimately, scientifically infer to design on inductively grounded signs such as FSCO/I a reasonable and scientific endeavour? Why or why not?

16: In that same light, is it the case that such a design theory proposal has been disestablished by actual observations contrary to its pivotal inductions and inferences to best explanations? (Or, has the debate mostly pivoted on latter-day attempted redefinition of science and its methods through so-called methodological naturalism that a priori undercuts the credibility of “undesirable” explanatory models of the past?) Why do you come to your conclusion?

17: Is it fair to hold – on grounds that inference to the best evolutionary materialism approved explanation of the past is not the same as inference to the best explanation of the past in light of all reasonably possible causal factors that could have been at work – that there is a problem of evolutionary materialist ideological dominance of relevant science, science education, and public policy institutions? Why or why not?

The final question for reflection raises issues regarding the ethical-cultural implications of views on the above for origins science in society:

18: In light of concerns raised from Plato in The Laws Bk X on, up to the significance of the challenge posed by Anscombe and others, that a worldview must have a foundational IS that can objectively ground OUGHT, how does evolutionary materialism – a descriptive term for the materialistic, blind-chance-and-necessity-driven, molecules-to-Mozart view of the world – cogently address morality in society and resolve the challenge that it opens the door to the rise of ruthless nihilistic factions whose view is in effect that, as a consequence of living in a materialistic world, knowledge and values are inherently only subjective and/or relative, so that might and manipulation make ‘right’?
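As an aside on the more quantitative questions in the list above (3 and 8), the arithmetic involved can be sketched briefly. Every number below is a hypothetical placeholder chosen only to show the form of the calculation, not a measured value:

```python
from fractions import Fraction

# Q3 sketch: if a blind sample of n draws is taken from a space of N
# configurations containing a target zone of T special configurations,
# the chance of hitting the zone at least once is bounded above by n*T/N.
N = 2 ** 500          # a 500-bit configuration space (~3.3e150 states)
T = 2 ** 100          # hypothetical size of the target ("needle") zone
n = 10 ** 102         # hypothetical number of blind sampling events
hit_bound = Fraction(n) * T / N
print(float(hit_bound))  # vanishingly small under these assumptions

# Q8 sketch: Chi_500 = Ip*S - 500, where Ip is information in bits and
# S is 1 for a specified (functional) configuration, 0 otherwise.
def chi_500(ip_bits, s):
    return ip_bits * s - 500

print(chi_500(1000, 1))  # past the 500-bit threshold on this model
print(chi_500(1000, 0))  # not specified, so below the threshold
```

The bound n*T/N is just the union bound from elementary probability; the Chi_500 function is a direct transcription of the formula as stated in question 8.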

 (NB: Those wishing to see how a design theory based view of origins would address these and related questions, (i) cf. the 101 level survey here on. Similarly, you are invited to look (ii) at the UD Weak argument correctives here, (iii) at the UD Glossary here, (iv) at UD’s definition of ID here, (v) at a general purpose ID FAQ here, (vi) at the NWE survey article on ID here (the Wikipedia one being an inaccurate and unfair hit piece) and (vii) at the background note here on.)

So, objectors to design theory, the ball is in your court. (NB: Inputs are also welcome from design theory supporters.)

How, then, do you answer? On what grounds? With what likely consequences for science, society, and civilisation? END

Comments
Onlookers, see the importance of Q1 above? Confusion about inductive reasoning is a root challenge in the debates over design theory. KF

kairosfocus
September 1, 2012 at 02:10 AM PDT
CR: Let me note on some points in your 72:

1: “[We] cannot observe *causes*. Furthermore, observations are based on hard to vary explanations for how we acquired them. So, we cannot positively support any particular theory or conclude it’s more probable via observations.”

You will notice that I have pointed out that science focuses on causal factors that are detectable in empirical situations. Similarly, I have already shown why the objection that observations are theory-laden is hyperskeptical, with the schoolmen who refused to look through Galileo’s telescope as a capital example. Inductive reasoning does not provide demonstration, but it does provide reasonable and often adequate support. And Hume’s objection spectacularly fails, as he himself could not live but by trusting patterns delivered by experience, patterns which he had good reason to see were empirically reliable, starting from the desirability and nutritiousness of given food and drink or the effectiveness of certain clothing against the cold. Had he ignored such, he would have starved and frozen. So, frankly, his life refutes his objections. (I doubt that you are prepared to hear this sort of thing simply because it is said a second time around, but I speak for record.) Avi Sion provides balance; let me clip him again on the principle of uniformity:
this “principle” may only be regarded as a heuristic idea, a rule of thumb, a broad but vague practical guideline to reasoning . . . . We might also ask – can there be a world without any ‘uniformities’? A world of universal difference, with no two things the same in any respect whatever is unthinkable. Why? Because to so characterize the world would itself be an appeal to uniformity. A uniformly non-uniform world is a contradiction in terms. Therefore, we must admit some uniformity to exist in the world. The world need not be uniform throughout, for the principle of uniformity to apply. It suffices that some uniformity occurs. Given this degree of uniformity, however small, we logically can and must talk about generalization and particularization. There happens to be some ‘uniformities’; therefore, we have to take them into consideration in our construction of knowledge. The principle of uniformity is thus not a wacky notion, as Hume seems to imply . . . . The uniformity principle is not a generalization of generalization; it is not a statement guilty of circularity, as some critics contend. So what is it? Simply this: when we come upon some uniformity in our experience or thought, we may readily assume that uniformity to continue onward until and unless we find some evidence or reason that sets a limit to it. Why? Because in such case the assumption of uniformity already has a basis, whereas the contrary assumption of difference has not or not yet been found to have any. The generalization has some justification; whereas the particularization has none at all, it is an arbitrary assertion. It cannot be argued that we may equally assume the contrary assumption (i.e. the proposed particularization) on the basis that in past events of induction other contrary assumptions have turned out to be true (i.e. 
for which experiences or reasons have indeed been adduced) – for the simple reason that such a generalization from diverse past inductions is formally excluded by the fact that we know of many cases that have not been found worthy of particularization to date. That is to say, if we have looked for something and not found it, it seems more reasonable to assume that it does not exist than to assume that it does nevertheless exist. Admittedly, in many cases, the facts later belie such assumption of continuity; but these cases are relatively few in comparison. The probability is on the side of caution. In any event, such caution is not inflexible, since we do say “until and unless” some evidence or argument to the contrary is adduced. This cautious phrase “until and unless” is of course essential to understanding induction. It means: until if ever – i.e. it does not imply that the contrary will necessarily occur, and it does not exclude that it may well eventually occur. It is an expression of open-mindedness, of wholesome receptiveness in the face of reality, of ever readiness to dynamically adapt one’s belief to facts. In this way, our beliefs may at all times be said to be as close to the facts as we can get them. If we follow such sober inductive logic, devoid of irrational acts, we can be confident to have the best available conclusions in the present context of knowledge. We generalize when the facts allow it, and particularize when the facts necessitate it. We do not particularize out of context, or generalize against the evidence or when this would give rise to contradictions . . .
2: “By using the word ‘trace’ you appear to [be] suggesting we can mechanically extrapolate theories from observations. But this isn’t possible as we get more out of a theory than its observations.”

Strawman, as I never suggested any simple algorithm, but have consistently discussed inference to best current explanation in a responsible context. And that explanations are not mere summaries of observations does not undermine that (a) they may be empirically reliable, (b) they may be candidates to be true.

3: “Science is not primarily about ‘stuff you can see’ as we use the unseen to explain the seen. Are you suggesting we can directly observe unseen things? How does that work, in detail? Or perhaps you’re suggesting we have some other infallible source regarding unseen things?”

Strawman again. As was pointed out, when we look at situations we did not see, we are looking at the results of what happened, in light of consequences and traces of causal factors at work. We can compare cases in the present or immediate vicinity, where we can test for causes and effects that are reliable and observable, serving as signs. Then it is reasonable and commonplace to infer that the best explanation for what was not directly observed or observable, but which has left traces, is the same sort of factor. Let me use the familiar example of a cheque. When the bank honours a cheque deposited, it is because it trusts that the same sign comes from the same cause, or at any rate is best explained by it. Even where fraud is possible, that is reasonable, responsible, and an everyday part of life. There is no good reason to then twist about and say: no, if you cannot deliver absolute proof, we may dismiss that which infers on the like logic of signs in science.
For instance, no one has directly observed electrons, but from a pattern of effects [a good example is Millikan's oil-drop experiment, which BTW is very hard to do practically], we have come to the conclusion -- to moral certainty -- that a common particle with a certain charge/mass ratio and a certain charge is the best explanation for a wide range of phenomena. Also, that as with other particles, it has wave properties. Likewise, no one has inspected distant stars directly. But again, traces we see and direct comparisons with spectra etc. lead us to make inferences about chemical composition, temperature, even apparent formation, age, life stage etc. So, do we abandon this because we may not prove beyond dispute? Locke's reply is biting (and goes to another explaining concept that many in our day are so quick to deride, but should rethink their views), in the opening introductory remarks of his Essay on Human Understanding, section 5. I cite this because it is apt and anticipated Hume by decades in a work he should have taken more seriously:
Men have reason to be well satisfied with what God hath thought fit for them, since he hath given them (as St. Peter says [NB: i.e. 2 Pet 1:2 - 4]) panta pros zoen kai eusebeian, whatsoever is necessary for the conveniences of life and information of virtue; and has put within the reach of their discovery, the comfortable provision for this life, and the way that leads to a better. How short soever their knowledge may come of an universal or perfect comprehension of whatsoever is, it yet secures their great concernments [Prov 1: 1 - 7], that they have light enough to lead them to the knowledge of their Maker, and the sight of their own duties [cf Rom 1 - 2 & 13, Ac 17, Jn 3:19 - 21, Eph 4:17 - 24, Isaiah 5:18 & 20 - 21, Jer. 2:13, Titus 2:11 - 14 etc, etc]. Men may find matter sufficient to busy their heads, and employ their hands with variety, delight, and satisfaction, if they will not boldly quarrel with their own constitution, and throw away the blessings their hands are filled with, because they are not big enough to grasp everything . . . It will be no excuse to an idle and untoward servant [Matt 24:42 - 51], who would not attend his business by candle light, to plead that he had not broad sunshine. The Candle that is set up in us [Prov 20:27] shines bright enough for all our purposes . . . If we will disbelieve everything, because we cannot certainly know all things, we shall do muchwhat as wisely as he who would not use his legs, but sit still and perish, because he had no wings to fly. [Text references added to document the sources of Locke's allusions and citations.]
When one cannot live consistent with a view, s/he needs to rethink it.

3: “Induction and criticism are not the same thing. Observations cannot positively support a theory. As Popper pointed out, we solve the problem of induction by rational criticism.”

Reiterating an error, as a drumbeat mantra, may hammer it home in our worldviews, but that hardly suffices to ground it. No-one has seriously argued that inductions deliver absolute certainty, but they often deliver such high confidence that we trust our lives to them. And, in a vast world of experience, observations do exist, and are sufficiently objective that we accept many deliverances as facts beyond reasonable doubt and demand -- for good reason -- that explanations account for them adequately. Yes, we must be open to correction on further analysis or experience, but for that Popper et al have nothing material to add to Newton in Opticks, Query 31, 1704:
As in Mathematicks, so in Natural Philosophy, the Investigation of difficult Things by the Method of Analysis, ought ever to precede the Method of Composition. This Analysis consists in making Experiments and Observations, and in drawing general Conclusions from them by Induction, and admitting of no Objections against the Conclusions, but such as are taken from Experiments, or other certain Truths. For [speculative] Hypotheses [not supported by empirical evidence] are not to be regarded in experimental Philosophy. And although the arguing from Experiments and Observations by Induction be no Demonstration of general Conclusions; yet it is the best way of arguing which the Nature of Things admits of, and may be looked upon as so much the stronger, by how much the Induction is more general. And if no Exception occur from Phaenomena, the Conclusion may be pronounced generally. But if at any time afterwards any Exception shall occur from Experiments, it may then begin to be pronounced with such Exceptions as occur. By this way of Analysis we may proceed from Compounds to Ingredients, and from Motions to the Forces producing them; and in general, from Effects to their Causes, and from particular Causes to more general ones, till the Argument end in the most general. This is the Method of Analysis: And the Synthesis consists in assuming the Causes discover'd, and establish'd as Principles, and by them explaining the Phaenomena proceeding from them, and proving the Explanations. [[Emphases added.]
(Onlookers, sometimes we need to remind ourselves that our forebears were not naive dummies.) 4: saying evolution is merely chance and necessity is like saying someone defeated by a chess program was defeated by electrons. While this is also true, you are appealing to a specific level of reductionism. Evolutionary processes create the knowledge of how to build adaptations, which is non-explanatory in nature. And I mean genuinely create knowledge, rather than having already existed in some form. Specifically, conjecture, in the form of genetic variation random to any specific problem to be solved, and refutation, in the form of natural selection. The result is non-explanatory knowledge. Here, first I have reported the general views of the evolutionary materialists, which you seem to be supporting. Of course, the alleged blind chance and necessity driven causal process was never observed to create body plans, but it seems convenient to try to reduce what we can and do directly observe to what we did not and cannot. In short, the above seems to work rhetorically by a version of: who you gonna believe, us -- duly dressed in the holy lab coats -- or yer lyin' eyes. It further needs to be noted that functionally specific, complex organisation and information -- contrary to confident manner declarations on what happened in the unobserved deep past -- has but one observed causal factor sufficient to account for it: design by skilled and knowledgeable intelligence. Similarly, on the needle in the haystack type analysis in light of the constraints on sampling -- cf. the just above to R0bb -- this pattern of observation makes good sense analytically. So, we have every epistemic right to conclude that FSCO/I is a reliable sign of design as material causal factor. Thus, objects embedding FSCO/I were credibly designed.
And, trying to get rid of the reality of reliable observations as a decisive test in science, is a case of sawing off the branch on which we all must sit. Nor am I impressed by label and dismiss tactics. 5: I’d agree that only people can create explanatory knowledge. I’d also agree that there are explanations for useful non-explanatory knowledge, even if it isn’t explicitly presented. So, as people, we can be cognizant of explanations for non-explanatory knowledge whenever we discuss it. This, however, doesn’t mean that knowledge of how to build organisms, which is found in the genome in a non-explanatory form, cannot be created in the absence of people. We have no good reason to infer or conclude that people exhaust the set of those capable of explanation. Similarly, what we have evidence of is that design of FSCO/I rich things is rooted in knowledgeable and skilled intelligence, not humanity as such. Not all people can design and build and program a computer from scratch. And, specifically, design by skilled and knowledgeable intelligence is the ONLY empirically warranted explanation for algorithmic information and data structures (as well as content) loaded in string data structures and executed by properly arranged executing machinery. It is beginning to look like you are trying to duck the force of evidence by resort to convoluted and flawed philosophical speculation. That falls before Newton's glorified common sense that speculative notions unsupported by relevant empirical findings should have no force in the face of such evidence. 6: That’s the myth that Popper was referring to. Inference is defined as “a conclusion is inferred from multiple observations”. This implies observations can make a theory more probable via observations. But it cannot. Again, you’ve got it backwards. Nope, YOU have it backwards, and are erecting strawman caricatures then knocking them down. Observations in an apparent pattern that is puzzling call for a unifying explanation.
Alternative possibilities are put forward and tested against observations, coherence and issues of simplicity vs ad hocness or simplisticness etc. The best is put forth as an empirically supported explanation and tested against predictive ability. In the case of the photoelectric effect, the wavelength threshold -- where longer wavelength light of high intensity has no effect, but weak light of short enough wavelength does -- and the relationship with the kinetic energy of the emitted electrons, in the hands of the very Einstein you go on to cite inappropriately, led in 1905 to the pivotal breakthrough for quantum theory. There is nothing in Einstein here:
A theory can thus be recognized as erroneous [unrichtig] if there is a logical error in its deductions, or as incorrect [unzutreffend] if a fact is not in agreement with its consequences. But the truth of a theory can never be proven. For one never knows that even in the future no experience will be encountered which contradicts its consequences; and still other systems of thought are always conceivable which are capable of joining together the same given facts.
. . . that is not already in principle in Newton. And, you somehow keep imagining that I am suggesting that theories are proved true beyond correction when I am using a specific modifier to say the opposite: inference to best CURRENT explanation. Please stop knocking over strawmen. And notice that while many explanations have been later overturned, it is also true that many explanations have not. So, we must be willing to accept that we make mistakes but that sometimes we can get it right too. Indeed, the very conclusion that we make mistakes is an inductive conclusion that has not been overturned. KF

kairosfocus
September 1, 2012, 02:08 AM PDT
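The photoelectric threshold behaviour cited above can be illustrated numerically. This is only a sketch of Einstein's 1905 relation KE_max = hf − φ; the work function value used (sodium, roughly 2.28 eV) is an assumed illustrative figure, not something taken from the discussion itself.

```python
# Illustration of the photoelectric threshold: below f0 = phi/h no
# electrons are emitted, however intense the light; above it, the
# max kinetic energy of emitted electrons rises with frequency.
h = 6.626e-34        # Planck constant, J*s
e = 1.602e-19        # joules per eV
c = 2.998e8          # speed of light, m/s

phi_eV = 2.28                    # work function of sodium, eV (assumed figure)
f0 = phi_eV * e / h              # threshold frequency, Hz
lam0 = c / f0                    # threshold wavelength, m

def ke_max_eV(wavelength_m):
    """Max kinetic energy of emitted electrons in eV; None if below threshold."""
    f = c / wavelength_m
    ke = h * f / e - phi_eV
    return ke if ke > 0 else None

print(f"threshold wavelength ~ {lam0 * 1e9:.0f} nm")
print("red 700 nm:", ke_max_eV(700e-9))   # longer than threshold: no emission
print("UV 300 nm:", ke_max_eV(300e-9))    # shorter: electrons emitted
```

Intense red light (700 nm) yields nothing, while weak UV (300 nm) ejects electrons, which is the threshold pattern the intensity-based classical picture could not explain.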
CR: We cannot observe causes. As such, all we can do is criticize theories with the intent of finding and correcting errors. KF: First, it is not true that we cannot observe causal factors at work…. These are not equivalent statements as I said we cannot observe *causes*. Furthermore, observations are based on hard to vary explanations for how we acquired them. So, we cannot positively support any particular theory or conclude it's more probable via observations. We're left with rational criticism. KF: … or trace them from their characteristic outcomes. That is how scientific laws are established after all. No, it's not. By using the word "trace" you appear to be suggesting we can mechanically extrapolate theories from observations. But this isn't possible as we get more out of a theory than its observations. This simply doesn't add up. KF: Similarly, it is not an unfair challenge to demand that a claimed causal factor held to be sufficient to cause an effect be demonstrated as actually doing so in our observation. That boils down to that in science — and common-sense day to day life — claims are subject to empirical observational tests. Science is not primarily about "stuff you can see" as we use the unseen to explain the seen. Are you suggesting we can directly observe unseen things? How does that work, in detail? Or perhaps you're suggesting we have some other infallible source regarding unseen things? Again, induction and criticism are not the same thing. Observations cannot positively support a theory. As Popper pointed out, we solve the problem of induction by rational criticism. Furthermore, saying evolution is merely chance and necessity is like saying someone defeated by a chess program was defeated by electrons. While this is also true, you are appealing to a specific level of reductionism. Evolutionary processes create the knowledge of how to build adaptations, which is non-explanatory in nature.
And I mean genuinely create knowledge, rather than having already existed in some form. Specifically, conjecture, in the form of genetic variation random to any specific problem to be solved, and refutation, in the form of natural selection. The result is non-explanatory knowledge. Does your account suggest this new knowledge existed at the outset? If so, it's creationism. Does your account suggest this knowledge "just appeared"? If so, it represents spontaneous generation, as found in aspects of Lamarckism. Is an account for this knowledge absent? If so, it's a bad explanation because it actually fails to solve the problem at hand. What is ID's account for how this knowledge was created? I'd agree that only people can create explanatory knowledge. I'd also agree that there are explanations for useful non-explanatory knowledge, even if it isn't explicitly presented. So, as people, we can be cognizant of explanations for non-explanatory knowledge whenever we discuss it. This, however, doesn't mean that knowledge of how to build organisms, which is found in the genome in a non-explanatory form, cannot be created in the absence of people. KF: A classic is how Newton inferred to the Universal law of gravitation, cf here. Another, is how Einstein inferred on the threshold effect with the photoelectric effect, to the reality of photons and the threshold equation that is in large part responsible for his Nobel Prize. That's the myth that Popper was referring to. Inference is defined as "a conclusion is inferred from multiple observations". This implies observations can make a theory more probable via observations. But it cannot. Again, you've got it backwards. To quote from an essay Einstein wrote in late 1919….
A theory can thus be recognized as erroneous [unrichtig] if there is a logical error in its deductions, or as incorrect [unzutreffend] if a fact is not in agreement with its consequences. But the truth of a theory can never be proven. For one never knows that even in the future no experience will be encountered which contradicts its consequences; and still other systems of thought are always conceivable which are capable of joining together the same given facts.
IOW, there are an infinite number of yet to be conceived explanations which are also compatible with the same observations. We cannot factor these un-conceived explanations into a calculus of probability, which makes it invalid as a means of deeming a theory more probable. It's simply not applicable in the sense you're implying. However, I'm a critical rationalist. As such, I'm open to you formulating a "principle of induction" that actually works *in practice*. However, no one has as of yet. KF: Now, obviously, scientific knowledge is provisional in key respects. That's fine, warrant — notice the distinction in terminology — comes in degrees, as has been known for millennia. Obviously? What about the empiricists, logical positivists and the like? Was it obvious to them? If you think it's obvious that knowledge must be justified by some ultimate source or arbiter, then it would come as no surprise that you think Darwinism cannot create the non-explanatory knowledge of how to build adaptations. Any such argument is parochial in nature, as it indicates one cannot recognize one's own conception of human knowledge as an idea that itself would be subject to criticism. KF: Where there is sufficient warrant that something is a best explanation and is empirically reliable, it is reasonable to use it in responsible contexts. In some cases, one would be irresponsible to ignore its force. Which is where I started out. Epistemology is an explanation about how knowledge is created. Whether "design" is the best explanation is based on implicit assumptions about knowledge, such as whether it is complex, whether it is genuinely created, etc. The best explanation doesn't refer to a theory proven more likely by observations (which isn't possible), it means an explanation that has withstood the most rational criticism.
A theory that doesn't stick its neck out, such as one based on an abstract designer that has no defined limitations, is a bad explanation because it cannot be significantly criticized. Why don't you start out by explaining how knowledge is created, then point out how evolutionary processes do not fit that explanation. Please be specific.

critical rationalist
August 31, 2012, 10:29 PM PDT
F/N: Someone may wish to object on how hill-climbing in a zone of function incrementally searches a much smaller space and so achieves wonders. The problem, as has been highlighted ever so many times, is that such starts within a zone of function. In the case of life forms, until we have an embryologically and ecologically feasible body plan, we are not within a zone of function, where that requires not 500 bits but more like 10 million to 100+ million bits, on the scope of genomes involved in observable life forms. And, as the OOL challenge indicates, this begins with the very first, unicellular body plan, with 100,000 - 1 million bits or so and no existing gated, encapsulated, metabolic automaton with a digital coded tape using von Neumann self replicator, for these are what have to be explained. Explained by blind chance and necessity, step by step, from a warm little pond or similar plausible initial environment. And the assertion of yesterday that such is an observed reality is blatantly false. KF

kairosfocus
August 31, 2012, 04:27 AM PDT
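The scale claim in the F/N above can be checked with a back-of-envelope calculation. The figures are the ones the post itself uses as rough bounds (about 10^57 atoms in the solar system, about 10^17 s, and an assumed fast-chemistry rate on the order of 10^14 events per atom per second); this is a sketch of the arithmetic, not a physical model:

```python
# Compare the number of 500-bit configurations with a generous upper
# bound on how many blind samples the solar system's atoms could take.
from math import log10

configs_log10 = 500 * log10(2)   # log10 of 2^500, ~150.5
samples_log10 = 57 + 17 + 14     # log10 of (atoms * seconds * rate)

print(f"2^500 ~ 10^{configs_log10:.1f} configs")
print(f"max blind samples ~ 10^{samples_log10}")
print(f"fraction of space sampled ~ 10^{samples_log10 - configs_log10:.0f}")
```

On these assumptions the maximal blind sample covers only about one part in 10^62 of the space, which is the "one straw from a vast hay bale" picture invoked later in the thread.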
Folks: This morning, we are not dealing with a poster child of irresponsible and outrageous conduct; we are dealing with long time UD commenter and design critic R0bb -- note the zero -- who has been around for years and years, I think at least as long as I have, has IIRC contributed valuable mathematical points used by WmAD etc., and has consistently been reasonable in behaviour and tried to address issues on the merits. A few days ago, Dr Dembski posted at ENV (and notified here) a Made-Simple on the law of conservation of info [LCI], so-called. (I won't get into debates on terminology, let us just say that his tends to be idiosyncratic.) Clipping a key summary, this is Dembski's essential point:
it's possible to characterize search in a way that leaves the role of teleology and intelligence open without either presupposing them or deciding against them in advance. Mathematically speaking, search always occurs against a backdrop of possibilities (the search space), with the search being for a subset within this backdrop of possibilities (known as the target). Success and failure of search are then characterized in terms of a probability distribution over this backdrop of possibilities, the probability of success increasing to the degree that the probability of locating the target increases . . . . Take an Easter egg hunt in which there's just one egg carefully hidden somewhere in a vast area. This is the target and blind search is highly unlikely to find it precisely because the search space is so vast. But there's still a positive probability of finding the egg even with blind search, and if the egg is discovered, then that's just how it is. It may be, because the egg's discovery is so improbable, that we might question whether the search was truly blind and therefore reject this (null) hypothesis. Maybe it was a guided search in which someone, with knowledge of the egg's whereabouts, told the seeker "warm, warmer, no colder, warmer, warmer, hot, hotter, you're burning up." Such guidance gives the seeker added information that, if the information is accurate, will help locate the egg with much higher probability than mere blind search -- this added information changes the probability distribution . . . . The Easter egg hunt example provides a little preview of conservation of information. Blind search, if the search space is too large and the number of Easter eggs is too small, is highly unlikely to successfully locate the eggs. A guided search, in which the seeker is given feedback about his search by being told when he's closer or farther from the egg, by contrast, promises to dramatically raise the probability of success of the search. 
The seeker is being given vital information bearing on the success of the search. But where did this information that gauges proximity of seeker to egg come from? Conservation of information claims that this information is itself as difficult to find as locating the egg by blind search, implying that the guided search is no better at finding the eggs than blind search once this information must be accounted for.
Now, R0bb has challenged this, here at UD and over at TSZ. (I would welcome a collection of TSZ links to his threads there.) By comment 13 in the UD notice thread, he outlined (where, my initial comments are further indented):
Some things to note about the LCI, independent of any ID claims: 1) Contrary to Dembski’s claim, the LCI is not universal. Counterexamples are easy to find.
[a --> In fact, let us note from Dembski at ENV:
>> the important issue, from a scientific vantage, is not how the search ended but the probability distribution under which the search was conducted. You don't have to be a scientist to appreciate this point. Suppose you've got a serious medical condition that requires treatment. Let's say there are two treatment options. Which option will you go with? Leaving cost and discomfort aside, you'll want the treatment with the better chance of success. This is the more effective treatment. Now, in particular circumstances, it may happen that the less effective treatment leads to a good outcome and the more effective treatment leads to a bad outcome. But that's after the fact. In deciding which treatment to take, you'll be a good scientist and go with the one that has the higher probability of success.>>
b --> That is, WmAD is speaking of the expected or "average" result, which tends to overwhelm fluctuations once systems have sufficient scale.]
2) Active information is sensitive to the definitions of the lower- and higher-level search spaces, which are modeling choices. Any observed process can be modeled such that it violates the LCI, and any observed process can be modeled such that the LCI holds.
[c --> see more detailed remarks below]
3) Even with models for which the LCI holds, there is still no guarantee that active information won’t be generated by chance. In fact, it’s easy to come up with a scenario in which we expect this to occur.
[d --> this is the issue of fluctuations, which becomes much less relevant for systems at significant scope, cf. below]
Any of the above can be conclusively demonstrated, albeit not easily in a blog comment . . . . A) Given conditions under which the LCI holds, intelligent designers are no less constrained by the LCI than nature is, since the LCI is strictly mathematical. So the LCI can’t be employed to distinguish an intelligent cause from a natural cause.
[e --> We can start with an empirical counter example. When they were looking for the Atocha etc., treasure hunters spent time in the Archive of the Indies, looking hard for clues as to the specific location, to narrow down the search. f --> The way that an intelligent cause acts is by information, knowledge, skill, procedures and heuristics that lead to avoiding trial and error scanning of vast config spaces, starting with searching for the right answer to sums in grade school. g --> In so working, intelligence often leaves traces that are characteristic of intelligence at work, on simple inductive analysis, i.e. we have identifiable signs such as FSCO/I. Even though it can be faked, the best explanation for deer tracks in a forest is deer. h --> This is backed up by the needle in haystack type challenge faced by blind trial and error. Engineers almost never seek solutions by blind trial and error; they narrow down the scope of search intelligently, and then may look at a reasonable number of alternatives. i --> And so, it is highly relevant to ask, on an inference to best explanation basis, whether the signs of intelligence, chance and necessity are visible in an object or process. And, we can successfully use this in many, many cases that are not controversial. j --> THE CONCLUSION SUGGESTED IN THE NAME OF MATHEMATICS IS FALLACIOUS. This is an improper use of authority.]
B) Given #1 above, claims that the LCI applies to Darwinian evolution must be justified, which would involve mathematically modeling Darwinian evolution. This is something that no IDist has done, AFAIK.
[k --> As long as such evo is a search, and as long as it depends on chance for variations to be culled out by differential reproductive success, it is amenable to examination on the credibility of chance based trial and error search, and analysis on signs. l --> We are of course, also neatly avoiding the OOL issue, which takes the differential reproductive success matter off the table, and shows the relevance of FSCO/I as a sign of intelligence.]
C) The Principle of Indifference (also called the Principle of Insufficient Reason) is a heuristic for assigning prior epistemic probabilities in the face of ignorance. Assuming, without updating the prior, that the prior reflects reality is literally an argument from ignorance.
m --> When we toss a coin or a die, we normally infer that there is an even chance of the possibilities coming up trumps, so we have 1/2 for H or T, or 1/6 for each die possibility. This can be revised, in light of further considerations, so it is not unreasonable. n --> There is an interesting discussion of the issue of the principle of uniformity by Avi Sion, that I now clip, as it gives a salutary lesson on what is reasonable and what is unreasonable and selectively hyperskeptical:
this “principle” may only be regarded as a heuristic idea, a rule of thumb, a broad but vague practical guideline to reasoning . . . . We might also ask – can there be a world without any ‘uniformities’? A world of universal difference, with no two things the same in any respect whatever is unthinkable. Why? Because to so characterize the world would itself be an appeal to uniformity. A uniformly non-uniform world is a contradiction in terms. Therefore, we must admit some uniformity to exist in the world. The world need not be uniform throughout, for the principle of uniformity to apply. It suffices that some uniformity occurs. Given this degree of uniformity, however small, we logically can and must talk about generalization and particularization. There happens to be some ‘uniformities’; therefore, we have to take them into consideration in our construction of knowledge. The principle of uniformity is thus not a wacky notion, as Hume seems to imply . . . . The uniformity principle is not a generalization of generalization; it is not a statement guilty of circularity, as some critics contend. So what is it? Simply this: when we come upon some uniformity in our experience or thought, we may readily assume that uniformity to continue onward until and unless we find some evidence or reason that sets a limit to it. Why? Because in such case the assumption of uniformity already has a basis, whereas the contrary assumption of difference has not or not yet been found to have any. The generalization has some justification; whereas the particularization has none at all, it is an arbitrary assertion. It cannot be argued that we may equally assume the contrary assumption (i.e. the proposed particularization) on the basis that in past events of induction other contrary assumptions have turned out to be true (i.e. 
for which experiences or reasons have indeed been adduced) – for the simple reason that such a generalization from diverse past inductions is formally excluded by the fact that we know of many cases that have not been found worthy of particularization to date. That is to say, if we have looked for something and not found it, it seems more reasonable to assume that it does not exist than to assume that it does nevertheless exist. Admittedly, in many cases, the facts later belie such assumption of continuity; but these cases are relatively few in comparison. The probability is on the side of caution. In any event, such caution is not inflexible, since we do say “until and unless” some evidence or argument to the contrary is adduced. This cautious phrase “until and unless” is of course essential to understanding induction. It means: until if ever – i.e. it does not imply that the contrary will necessarily occur, and it does not exclude that it may well eventually occur. It is an expression of open-mindedness, of wholesome receptiveness in the face of reality, of ever readiness to dynamically adapt one’s belief to facts. In this way, our beliefs may at all times be said to be as close to the facts as we can get them. If we follow such sober inductive logic, devoid of irrational acts, we can be confident to have the best available conclusions in the present context of knowledge. We generalize when the facts allow it, and particularize when the facts necessitate it. We do not particularize out of context, or generalize against the evidence or when this would give rise to contradictions . . .
o --> So, we are back to the issue of genuinely understanding induction, and in particular inference to best current explanation. p --> On the evidence in hand, it is reasonable and even responsible to infer that, if we have no good reason to presume a non-uniform distribution, a uniform one reasonably applies, then to adjust in light of evidence. E.g. we speak of bent or two-headed coins, and of loaded dice. But, we normally treat coins and dice as fair till further evidence emerges. q --> And, again, in statistical thermodynamics, the principle of indifference applied to the set of microstates is a longstanding and highly successful practice; it is not a suspect innovation of design thinkers. r --> So, to simply raise a dismissive point is not going to be good enough.
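The "treat the die as fair till evidence emerges, then adjust" practice described at p above can be sketched as a simple prior-plus-update calculation. This is a minimal illustration, assuming Laplace smoothing (one pseudo-count per face) as the update rule, and the roll data are invented:

```python
# Principle of indifference as a starting prior, revised by observation:
# uniform 1/6 per face with no data, shifted as counts accumulate.
from collections import Counter

def posterior_means(rolls, faces=6):
    """Start from a uniform prior (1 pseudo-count per face), update on data."""
    counts = Counter(rolls)
    total = len(rolls) + faces
    return [(counts.get(f, 0) + 1) / total for f in range(1, faces + 1)]

# No data: indifference gives 1/6 for each face.
print(posterior_means([]))
# A run heavy in sixes (a "loaded die") shifts the estimate away from uniform.
print(posterior_means([6] * 30 + [1, 2, 3, 4, 5]))
```

With no observations the estimate is exactly the indifference assignment; the loaded-die data pull the estimate for face six toward 0.76 while the others shrink, which is the "adjust in light of evidence" step.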
I suspect that R0bb's background does not include statistical thermodynamics, and so he tends to miss the significance of what happens when we deal -- not with toy examples -- but with realistically complex systems that have very large sets of possible outcomes or states etc. That which is logically possible and mathematically demonstrable may well be physically irrelevant and unobservable on the gamut of a lab, or a solar system of 10^57 atoms, or a cosmos of 10^80 atoms. This is in fact the foundation of the second law of thermodynamics in its statistical form, as is raised in Q 5 above:
5: Is it reasonable to compare this general analysis to the grounding of the statistical form of the second law of thermodynamics, i.e. that under relevant conditions, spontaneous large fluctuations from the typical range of the bulk of [microstate] possibilities will be vanishingly rare for reasonably sized systems? If you think not, why not?
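The statistical point in Q 5 can be illustrated with a small simulation. For n fair coins the fraction of heads has standard deviation 0.5/sqrt(n), so relative fluctuations shrink as the system grows; the trial counts and sizes below are arbitrary illustrative choices:

```python
# Largest observed deviation of the heads-fraction from 0.5, across
# many runs, for systems of increasing size: fluctuations that are
# routine for 10 coins become vanishingly rare for 10,000.
import random
from math import sqrt

random.seed(1)

def max_deviation(n, trials=300):
    """Largest |fraction of heads - 0.5| seen over many n-coin runs."""
    return max(abs(sum(random.random() < 0.5 for _ in range(n)) / n - 0.5)
               for _ in range(trials))

for n in (10, 100, 10_000):
    print(n, round(max_deviation(n), 3), "theory sd:", round(0.5 / sqrt(n), 3))
```

At toy scale (n = 10) deviations of 30-40% from the expected value show up readily; at n = 10,000 even the worst run stays within a couple of percent, which is the sense in which large spontaneous fluctuations become negligible for reasonably sized systems.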
Boiled down, the abstract possibility that lotteries are winnable does not imply that such can practically be won on the gamut of available search resources. I replied to R0bb in the LCI thread overnight and just now, and will now clip from the two comments where that occurred: ___________ LCI, 38: >> Wiki, on sampling frame vs. population:
In statistics, a sampling frame is the source material or device from which a sample is drawn.[1] It is a list of all those within a population who can be sampled, and may include individuals, households or institutions . . . . In the most straightforward case, such as when dealing with a batch of material from a production run, or using a census, it is possible to identify and measure every single item in the population and to include any one of them in our sample; this is known as direct element sampling.[1] However, in many other cases this is not possible; either because it is cost-prohibitive (reaching every citizen of a country) or impossible (reaching all humans alive).
In short, a population of possibilities is often sampled, and that sample may come from a defined subset that may or may not bias outcomes. In the case of a config space W [Omega will not print right], we may set up a frame F, that contains a zone of interest, T. If it does so, the odds of a sample of size s hitting T in F will be very different from that of s in W. That is simple to see. It may be harder to see that, say, a warmer/colder set of instructions is such a framing. But obviously, such instructions tell the seeker whether he is trending toward or away from the target. That is, hill-climbing reframes a search task in ways that make it much easier to hit T. Now, multiply by three factors:
a: s is constrained by accessible resources, in such a way that a blind, random search on W is maximally unlikely to hit T. b: by suitably reframing to a suitable F, s is now much more likely to hit T. c: But by reframing to G, s is now even more unlikely to hit T than a blind random search on W, as T is excluded from G.
Now, obviously, moving from W to F is significant. In effect F maps a hot zone that drastically enhances the expected outcome of s. But, that implies that picking your F is itself the result of a higher order challenge. For if T is small and isolated in W, and we pick a frame at random, a type-G is far more likely than a type-F. So, the search for a frame is a highly challenging search task itself. Indeed, in the case of interest, comparable to the search for T in W itself. The easiest way to get a type-F is to use accurate information. For instance, those who search for sunken Spanish treasure fleet ships often spend more time in the Archive of the Indies in Spain than in the field; that is how significant finding the right frame can be. Where also, it is that information that gets us to a type-F search rather than the original type-W one. Indeed, the Dembski-Marks model boils down to measuring the typical improvement provided by advantageous framing. This, by in effect converting the jump in estimated probability in moving frame from W to F into an information metric. (Probabilities are related to information, as standard info theory results show.) That, contrary to dismissive remarks, is reasonable. The relevance of all this to the debates over FSCO/I is obvious. When we have a functional object that depends for functionality on the correct arrangement of well-matched parts, this object can be mapped in a field of possibilities W, in zones of interest T. One way to reduce this to information is to set up a nodes-arcs specification that WLOG can be converted into a structured set of strings. (AutoCad is used for this all the time, and the DWG file size serves as a good rule of thumb metric of the degree of complexity.) Obviously not any config of components will work. Just think about trying to put a car engine back together and getting it to work at random, or turning a random configuration of alphanumeric characters back into a functioning computer program.
That is where the concept of islands of function comes from. A simple solar system level threshold for enough complexity to make the isolation of T significant is 500 bits. At that level, the 10^57 atoms of our solar system, across its lifespan of about 10^17 s on the typical timeline, at the fastest rates of chemical reactions, would be able to look at maybe the equivalent of a one-straw sized sample of a cubical hay bale 1,000 light years thick. That is how the frame would be naturally constrained as to scope. Even if such a bale were superposed on the Galaxy, centred on Earth — about as thick — a sample at random would (per sampling theory) be overwhelmingly likely to reflect the bulk of the distribution: straw. That is the issue of FSCO/I, and it is why the most credible causal source for it is design. >> LCI, 46: >> Earlier, I pointed out that when one searches in a space or samples it, one faces the issue of sampling frame, with potential for bias. In the search context, if one’s sampling frame is a type-F, one may drastically improve the conditional probability of finding the target sub-set of space W, T, given sample frame F, on a search-sample of scope s. But also, if the frame is a type-G instead, then one has reduced the conditional probability of successful search given sample frame G, to zero, as T is not in G. I then raised the issue that searching for a sample frame is a major challenge. I should note on a reasonable estimate of that challenge. W is the population, the set of possible configs here. The possible F’s (obviously a frame is non-unique) and G’s are obviously sub-sets of W. So, we are looking at the set of possible subsets of W, perhaps less the empty set {} in practical terms, as if one is in fact taking on a search, one will have a frame of some scope. But, for completeness that empty set would be in, and takes in the cases of no-sample. The power set of a given set of n members, of course, has 2^n members.
In the case of a set of the possible configs for 500 bits, we are looking at the power set of a set of 2^500 ~ 3.27*10^150 members. Then, raise 2 to that power: 2^(3.27*10^150). The scope of such a set overwhelmingly, way beyond merely astronomically, dwarfs the original set. To estimate it, observe that log x^n = n log x. 3.27*10^150 times log 2 ~ 9.85*10^149. That is the LOGARITHM of the number. Going to the actual number, we are talking here of essentially 10 followed by 10^150 zeros, which we could not write out with all the atoms of our observed cosmos, not by a long, long, long shot. Take away 1 for eliminating the empty set, and that is what we are looking at. So, first and foremost, we should not allow toy examples that do not have anywhere near the relevant threshold scope of challenge on complexity to mislead us into thinking that the search for a successful search strategy — remember, that boils down to being a framing of the sampling process — is an easy task. So, absent special information, the blind search for a good frame will be much harder than the direct blind search for the hot zone T in W. So also, if searching blindly by trial and error on W is utterly unlikely to succeed, searching blindly in the power set less 1, (2^W) – 1, will be vastly more unlikely to succeed. And, since — by virtue of the applicable circumstances that sharply constrain configs to get them to function in relevant ways — T is small and isolated in W, by far and away most of the search frames in that set will be type-G, not type-F. Consequently, if a framing "magically" transforms the likelihood of search success, the reasonable best explanation is that the search framing was intelligently selected on key information. And it is not unreasonable to define a quantity for the impact of that information, on the gap between blind search on W and search on F. 
Hence the concept and metrics for active information are not unreasonable on the whole, never mind whatever particular defects may be found with specific models and proposed metrics. One last point. In thermodynamics, it is notorious that for small, toy samples, large fluctuations are quite feasible. But, as the number of particles in a thermodynamic system rises to more realistic levels, the fact that the overwhelming bulk of the distribution of possibilities tends to cluster on a peak utterly dominates behaviour. So, yes, for toy examples we can easily enough find large fluctuations from the "average" — more properly, expected — outcome. But once we go up to realistic scale, spontaneous, stochastic behaviour will normally tightly cluster on the bulk of the distribution of possibilities. Or, put another way, not all lotteries are winnable, especially the naturally occurring ones. Those that are advertised all around are very carefully designed to be profitable and winnable, as the announcement of a big winner will distract attention from the typical expectation: loss. So, to point to the abstract possibility of fluctuations, especially in toy examples, is distractive and strawmannish relative to the real challenge: hitting a tiny target zone T in a huge config space W, usually well beyond 2^500 in scope. As we can easily see, on the scope of resources in our solar system, the possible sample size relative to the scope of possibilities is overwhelmingly unfavourable, leading to the problem of a chance-based needle-in-a-haystack blind search exercise on steroids. (Remember, mechanical necessity does not generate high contingency; it is chance or choice that do that.) 
The result of that challenge is obvious all around us: the successful creation of entities that are functional, complex and dependent on a specific config or a cluster of similar configs to function is best explained on design by skilled and knowledgeable intelligence, not blind chance and mechanical necessity. The empirical evidence and the associated needle in haystack or monkeys at keyboards challenges are so overwhelmingly in favour of that point that the real reason for the refusal to accept this as even "obvious," is prior commitment to and/or indoctrination in the ideology that blind chance and necessity moved us from molecules to Mozart.>> _____________ We can see just how central the syllabus of questions in the OP is, and we can see that the objections keep running into conceptual difficulties that start with the nature of inductive reasoning. And, it is reasonable indeed to see that the blind search for a good search frame in the set 2^W – 1 is often considerably more difficult than that for the space itself. So, if we have a good search frame that makes finding T in W through framing F much easier than blindly searching for T in W, that needs to be explained. And it is reasonable -- whatever technical problems may arise in practice -- to give as a metric an estimate of how much the framing improves the search, converted into information measures. KF

kairosfocus
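The arithmetic behind the 500-bit threshold and the power-set-of-frames estimate in the comment above can be checked directly. A minimal sketch in Python, assuming (as the quoted passages do) roughly 10^57 atoms and a 10^17 s timeline, plus an assumed generous rate of 10^14 inspection events per atom per second:

```python
from math import log10

bits = 500
W = 2 ** bits                       # size of the 500-bit configuration space
atoms = 10 ** 57                    # atoms in the solar system (order of magnitude)
seconds = 10 ** 17                  # solar-system timeline, in seconds
rate = 10 ** 14                     # inspection events per atom per second (assumed)

samples = atoms * seconds * rate    # generous upper bound on blind samples of W
print(f"|W| ~ 10^{log10(W):.1f}")                               # ~ 10^150.5
print(f"sample fraction ~ 10^{log10(samples) - log10(W):.1f}")  # ~ 10^-62.5

# The set of possible search frames is the power set of W, with 2^|W| members;
# even the LOGARITHM of that count already dwarfs |W| itself:
print(f"log10(2^|W|) ~ {W * log10(2):.2e}")                     # ~ 9.85e149
```

Even on these deliberately generous assumptions, the fraction of W that could ever be blindly examined is on the order of 10^-63, which is the "one straw in a 1,000 light-year hay bale" comparison in numerical form.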
August 31, 2012 at 04:12 AM PDT
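The thermodynamic-fluctuation point in the comment above (small toy samples fluctuate freely; realistic-scale samples cluster on the bulk of the distribution) can be illustrated with a fair-coin toy model. A sketch; the 60% cutoff and the two sample sizes are arbitrary choices for illustration:

```python
from math import comb

def tail(n: int, frac: float) -> float:
    """Exact P(at least frac*n heads in n fair coin flips)."""
    k0 = int(frac * n)
    return sum(comb(n, k) for k in range(k0, n + 1)) / 2 ** n

# A 60%-heads "fluctuation" is routine in a small sample...
print(f"n = 10:   P(>=60% heads) = {tail(10, 0.6):.3f}")    # 0.377
# ...but at larger scale, outcomes cluster tightly on the bulk:
print(f"n = 1000: P(>=60% heads) = {tail(1000, 0.6):.1e}")  # ~1e-10
```

The same deviation that appears nearly four times in ten at n = 10 becomes effectively unobservable at n = 1000, which is the sense in which scale suppresses fluctuations.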
CR: Let me pause and follow up after a rather long day:
You're asking me for a demonstration to positively support the idea that a specific explanation is more probable. This is a form of justificationism. We cannot observe causes. As such, all we can do is criticize theories with the intent of finding and correcting errors.
First, it is not true that we cannot observe causal factors at work, or trace them from their characteristic outcomes. That is how scientific laws are established, after all. Similarly, it is not an unfair challenge to demand that a claimed causal factor held to be sufficient to cause an effect be demonstrated as actually doing so in our observation. That boils down to this: in science -- and in common-sense day-to-day life -- claims are subject to empirical observational tests. A classic is how Newton inferred the universal law of gravitation, cf here. Another is how Einstein inferred, from the threshold behaviour of the photoelectric effect, the reality of photons and the threshold equation that is in large part responsible for his Nobel Prize. Now, obviously, scientific knowledge is provisional in key respects. That's fine; warrant -- notice the distinction in terminology -- comes in degrees, as has been known for millennia. Where there is sufficient warrant that something is a best explanation and is empirically reliable, it is reasonable to use it in responsible contexts. In some cases, one would be irresponsible to ignore its force. And yes, it means that we TRUST beyond what we can prove. What's new about that? It has always been so. KF

kairosfocus
August 30, 2012 at 05:10 PM PDT
H'mm: Of course, we see from Venter et al that it is reasonable to expect that a sufficiently sophisticated molecular nanotech lab can build a living cell. As has been repeatedly pointed out and insistently ignored. But even that is too much. The basic thing is that we have good reason to identify the only observed cause of FSCO/I, and to see why the lottery to get to it by chance and blind mechanism will not be winnable. So, what we do is look at a key signature in cell based life and infer that the known and reliably observed cause of such, design, is the best explanation. Not hard, but rather inconvenient for those who wish to insist that that which has never been seen doing such, and which runs into a major sampling theory challenge to look like a credible winner of a lottery, is what "must" have happened. KF

kairosfocus
August 30, 2012 at 04:58 PM PDT
For the record- Design is a mechanism: If a mechanism is a method or means of doing something, and design is the way in which something is planned and made, then it is obvious and undeniable that design is a mechanism. And in the context of ID vs the ToE, mechanism pertains to a method or means of doing something- for example, according to the ToE an accumulation of genetic accidents is the method or means (i.e. the way) by which the diversity of living organisms arose. And according to ID they evolved by design, as in a targeted search and/or built-in responses to environmental cues. Our mechanism of a targeted search has been demonstrated to be very powerful. Your mechanism of accumulations of genetic accidents is good at breaking things. You lose by observation and testing.

Joe
August 30, 2012 at 04:36 PM PDT
Game?

Joe
August 30, 2012 at 04:18 PM PDT
Toronto- Game's over. You want to discuss how we know the abilities of designers and I told you. We know the abilities of your position's proposed mechanisms and they don't appear to be capable of constructing anything useful. AGAIN: However if YOU could step forward and demonstrate your position's proposed mechanism can account for what we say is designed, you win. But you will never do such a thing and that bothers you. Until then, bye-bye, nothing left to respond to. However I may choose to harpoon your predictable hissy-fit over on my blog.

Joe
August 30, 2012 at 04:18 PM PDT
And the "reply":
Hey Joe, how do we know “no designer required evolution” would result in us? Answer = us!
It didn't result in us, as we are not the result, just another link, and we are not alone on this planet. How do we know "no designer required evolution" can do anything? Answer = we don't, as there isn't any evidence that it can, and it bothers you.

Joe
August 30, 2012 at 03:29 PM PDT
Toronto did give us this:
Do you have empirical support that the designer, ID’s claimed cause of life, had adequate ability to cause the effect we call life?
Hey Toronto- how do we know someone had the ability to build Stonehenge? Answer = Stonehenge! How do we know someone had the ability to build the Antikythera mechanism? Answer = the Antikythera mechanism! As I told you before, designers, successful designers anyway, have the ability to design what it is they designed. However if YOU could step forward and demonstrate your position's proposed mechanism can account for what we say is designed, you win. But you will never do such a thing and that bothers you.

Joe
August 30, 2012 at 03:05 PM PDT
CR: Busy. A claimed cause of an effect must have empirical support that it is adequate. No observation, no empirical support. KF

kairosfocus
August 30, 2012 at 01:05 PM PDT
Until I have more time, see the following presentations on Critical Rationalism and Probability.

critical rationalist
August 30, 2012 at 08:59 AM PDT
KF, I'm short on time today, but noticed this from your previous comment. KF: You need to explain on demonstrated observed cause in the present how codes, code string structures, algorithms and functionally specific and complex data expressed in codes, execution machinery properly organised, and the associated gated, encapsulated, metabolic automaton with a vNSR originated by chance and necessity in some warm little pond or the like environment. You're asking me for a demonstration to positively support the idea that a specific explanation is more probable. This is a form of justificationism. We cannot observe causes. As such, all we can do is criticize theories with the intent of finding and correcting errors. From the following essay on William Bartley's work....
3. Responses to the dilemma of the infinite regress versus dogmatism In the light of the dilemma of the infinite regress versus dogmatism, we can discern three attitudes towards positions: relativism, "true belief" and critical rationalism [Note 3]. Relativists tend to be disappointed justificationists who realise that positive justification cannot be achieved. From this premise they proceed to the conclusion that all positions are pretty much the same and none can really claim to be better than any other. There is no such thing as the truth, no way to get nearer to the truth and there is no such thing as a rational position. True believers embrace justificationism. They insist that some positions are better than others, though they accept that there is no logical way to establish a positive justification for a belief. They accept that we make our choice regardless of reason: "Here I stand!". Most forms of rationalism to date have, at rock bottom, shared this attitude with the irrationalists and other dogmatists because they share the theory of justificationism. According to the critical rationalists, the exponents of critical preference, no position can be positively justified but it is quite likely that one (or more) will turn out to be better than others in the light of critical discussion and tests. This type of rationality holds all its positions and propositions open to criticism, and a standard objection to this stance is that it is empty; just holding our positions open to criticism provides no guidance as to what position we should adopt in any particular situation. This criticism misses its mark for two reasons. First, critical rationalism is not a position. It is not directed at solving the kind of problems that are solved by fixing on a position. It is concerned with the way that such positions are adopted, criticised, defended and relinquished. 
Second, Bartley did provide guidance on adopting positions; we may adopt the position that to this moment has stood up to criticism most effectively. Of course this is no help for people who seek stronger reasons for belief, but that is a problem for them, and it does not undermine the logic of critical preference.
So, despite not being a justificationist, I'm not a relativist. From the Critical Rationalism entry on Wikipedia....
By dissolving justificationism itself, the critical rationalist regards knowledge and rationality, reason and science, as neither foundational nor infallible, but nevertheless does not think we must therefore all be relativists. Knowledge and truth still exist, just not in the way we thought.
Induction is illogical in that we cannot justify theories or make them more probable via observations. Yet, you're asking me to do just that.

critical rationalist
August 30, 2012 at 08:54 AM PDT
TSZ appears to be riding high on R0bb's failed destruction of LCSI- they don't understand that it is a failed attempt. atbc- I went there a few days ago and I still feel slimy. But it is a given they are having fun in their swamp. TWT's site- forget it, I won't go there.

Joe
August 30, 2012 at 06:44 AM PDT
PS: Any volcanic eruptions at TSZ and the outright hate sites yet?

kairosfocus
August 30, 2012 at 06:36 AM PDT
Joe, Yup. And that shape-recognition serves as a handshaking protocol to load the right tRNA with the right AA. There is no force of nature that makes any given AA couple to the particular CCA coupler; it is based on a handshaking and complex key-lock fitting protocol. All of which should sound rather familiar. And of course, you will see why I used "loading enzyme" in place of some chemical/biochemical gobbledygook. All those chemical prefixes and suffixes and terms have a meaning, but one strictly for the initiated; a functional description is good enough for practical purposes. Also, I should note that in the ribosome, the tRNA acts as well as a position-arm device with a tool tip, used as a pick-and-place delivery vehicle. KF

kairosfocus
August 30, 2012 at 06:26 AM PDT
KF, Yes that CCA is universal, but the rest of the molecule is not. And allegedly, if I remember correctly, that is what helps determine what amino acid goes with which tRNA. The aminoacyl-tRNA synthetases are different- each has its own tRNA (for the most part).

Joe
August 30, 2012 at 05:07 AM PDT
F/N: oh, yes, those who are hot and bothered over how modern GA's are not explicitly targeted searches need to read the rest of the story here in the much despised 101 corrective, e.g. points ix to xiv in succession. Starting in an island of function that comes from a much bigger space of possibilities overwhelmingly dominated by seas of non-function begs big questions. In short, you have a mechanism for designed adaptation to fit niches, not a mechanism for the origin of body plans. KF

kairosfocus
August 30, 2012 at 04:55 AM PDT
Joe, looks like AM needs to read even Wiki a little more closely, paying attention to the universal coupler CCA end, here on. I have made a bit fuller note at 52. KF

kairosfocus
August 30, 2012 at 04:51 AM PDT
Folks: Dr Who is today's poster-child no 1, for this parody -- he CANNOT have meant this seriously -- of abductive reasoning (with a healthy dash of strawman false statement after repeated correction on the side):
Let’s do abductive reasoning. Life is a chemical phenomenon.
[a --> Life USES chemical phenomena, much as engineered systems all around us use chemical and physical phenomena all the time. To say life IS a chemical phenomenon begs huge questions, and this is in the first line of the argument. The intended abduction fails to be logically credible at the outset.]
Unintelligent chemical processes are known to create chemical phenomena. Humans can intelligently design chemical phenomena.
[b --> Notice, this concedes the point but diverts from a repeated correction. In the case of engineering, it is not merely being human that counts; intelligence crystallised and focussed in knowledge and skill are pivotal. So we know that "intelligence" in the relevant skilled design context is not equivalent to "human." But, never mind repeated correction, evo mat advocates routinely pretend that they can equate the two.]
Humans were not present on the early earth when life first appeared.
[c --> True but misdirected. The issue is: is there a characteristic and empirically reliable sign of intelligent cause by design, as opposed to blind chance and/or mechanical necessity? That is shown separately for FSCO/I, on direct induction and on the needle in haystack calculation that, per sampling theory, makes the chance based arrival at the shores of an island of function so maximally improbable as to be unobservable. Notice the artfully unanswered Q's on this.]
No other intelligent beings are known to be able to intelligently design chemical phenomena, or to have been present on the early earth.
[d --> More exactly, we have not directly observed such. Which is why we are in an inference to best explanation context, on identified signs, just as Geologists try to reconstruct past processes that account for a peculiarly shaped river valley, or meanders etc etc. That we have not seen a phenomenon or event directly does not imply that it is impossible for it to happen.]
Therefore, I infer that unintelligent chemical processes are the best explanation for the origin of life.
[e --> Strawman. The material issue of searching the config space to get to FSCO/I has been dodged, by using a red herring on human intelligence led out to the caricature of the design argument implied in the above. This is telling us a lot about evo mat advocates having no cogent answer on the merits to the issue of inference on reliable signs. When you duck the point of an exam question, you get an F.]
I hypothesise that chemical reactions and chemical evolution led to the first life form(s).
[f --> In a context where he has not shown that -- in the present, where we can observe, such unintelligent forces can and do give rise to FSCO/I involving inter alia digital codes, algorithms, execution machinery properly organised, etc etc. But these phenomena are central to the workings of cell based life, and we have one routinely observed cause of such, for good reason having to do with search space challenges: knowledgeable and skilled intelligence working by purposeful choice contingency.]
Kairosfocus hypothesises that non-living intelligent designers created the first life forms.
[g --> This is now so much of an insistent distortion in the teeth of repeated correction and duties of care to fairness and accuracy that it is a lie, a statement in willful disregard to the truth, hoping to profit by it being perceived as the truth. h --> To see an example of why that is so, let us go above to that ever so unlucky Q 13:
13: Is it fair or not fair to suggest that on what we have already done with digital technology and what we have done with molecular nanotech applied to the cell, it is credible that a molecular nanotech lab several generations beyond Venter etc would be a reasonable sufficient cause for what we see? If not, why not? [In short, the issue is: is inference to intelligent design specifically an inference to “supernatural” design? In this context, what does “supernatural” mean? “Natural”? Why do you offer these definitions and why should we accept them?]
i --> Why do I raise this? Because, ever since Thaxton et al in TMLO in 1984, design thinkers have acknowledged that design detection methods are detecting an action based on its characteristic signs, not whodunit. Thus, I have admitted that a molecular nanotech lab a few generations beyond Venter et al would be a SUFFICIENT cause for the phenomena we see, in a context where we see in action a lot of first steps to the complete design and building of a living cell or crucial components from scratch. j --> Of course, a sufficient cause is not the same as the actual cause, just as implication is not the same as equivalence. So, for the specific design of cell based LIFE ON EARTH and in immediate environs, we need not infer to any cause that goes beyond such a nanotech lab. k --> I have also pointed out repeatedly that where a bigger issue comes up is at the next level, once we see that the cosmos we live in is, on dozens of dimensions, fine tuned for C-chemistry cell based life, starting with the fact that, per relevant fine tuned nuclear physics, the first four most abundant elements are H, He, C & O, with N coming up close. That gets us to stars, the rest of the atomic table of elements in stars, to water, to organic chemistry, to carbohydrates, to fats and to proteins. In short -- follow up the link and onward linked more detailed and more advanced discussions -- there is serious reason to infer to design of our observed cosmos, even through a multiverse speculation. (E.g. the "cosmos bakery" needed to get so fine tuned an operating point will most reasonably be just as much a fine tuned phenomenon. Similarly, if a lone fly on a wall is nailed by a bullet out of nowhere, that points to a tack-driving rifle and a first-class marksman, never mind if, down the wall a bit, there are stretches carpeted with those disgusting insects, so that any bullet hitting there would have made fly paste. 
This is of course yet another case of the isolated islands of function issue on chance contingency by sampling at random, vs choice contingency by intelligence.) l --> In addition, the observed cosmos credibly had a beginning and shows other signs of being contingent. That points to some worldview level logical issues -- DW has raised a worldview level issue (as I have had to already point out to him) -- for a contingent being is not self-explanatory. There is a switch on/off factor that is external to it. m --> That is, it has a cause, and is dependent on something that must be "turned on" for it to be there and to continue to be there. Just as, a fire needs to have heat, fuel, a heat-evolving chain reaction, and oxidiser, knock out any and it either will not start or will go out. n --> But the beginning of the cosmos multiplied by its evident fine tuning points to something else at causal root, a kind of being that has no turn-on switch, i.e. no external causal dependencies, the necessary being. Such a being -- very broad sense -- is a necessary being, which has no beginning and cannot cease to be. As I often point out the truth expressed in 3 + 2 = 5 is an example. o --> But there is another candidate, one that is highly important in intellectual history and in the lives of millions: a mind, one that is immaterial and eternal. Where we are seeing another answer to the question, what is life: a self-moved (notice the reflexivity and implied looping) purposeful entity is living. p --> In this case [COSMOLOGICAL origins], an eternal mind, with the purpose, knowledge, skill and power to create one or more cosmi. Which sounds a lot like a very familiar figure, the architect, builder and maker of heaven and earth, aka God. q --> Yes, you ask a philosophically loaded question, you move into the province of philosophy, where the method is comparative difficulties and no serious alternative can honestly be locked out of the discussion. 
r --> And, the common dismissal of God as an irrational option runs into a major problem: millions across time and space including leading figures in the history of our civilisation and science (try Pascal's night of fire Nov 23 1654 as just one instance) claim to have met and been transformed by God, so much so that if they are ALL delusional this implies the utter unreliability of the human mind, including that of those who deny God. So, it is not wise to saw off the branch on which one must sit to reason [which, of course, evolutionary materialists seem to do in several ways]. I suggest here on for more on that. s --> So, DW has begged major questions, and has misrepresented what he objects to in the teeth of correction. Grade F, again.]
Chemical reactions and chemical evolution are both directly observed realities.
[t --> AND of course is a logical operator that joins equals that must both be true for the composite to be true. That is why the joining of a first part which is trivial to a second part that is blatantly false is an example of a big lie tactic. DW knows or -- to be ignorant on this is just as culpable -- should know that we have precisely not observed the origin by spontaneous chemical reactions under reasonable initial conditions of the components and configurations of a gated, encapsulated, metabolic automaton with a von Neumann self replicator using digitally coded algorithmic control tapes. Not even close. So, he is falsely and willfully presenting a speculation that has no observational warrant to stand in the place of what we do observe, the intelligent cause of codes, algorithms, FSCO/I and the integration of functioning systems using same. The need to resort to blatant misrepresentation of facts speaks eloquently of failure to have present observations of blind chance and mechanical necessity in a warm pond or the equivalent, forming up life or its major and crucial information-rich components. But also, we know that codes, algorithms and the like represent purpose and knowledge, i.e. they are characteristic observations of intelligence in action. But DW is desperate to obscure this. Grade F, again.]
Non-living intelligent designers have never been observed to scientific knowledge, and are not even known to be possible.
[u --> Building on the strawman. Grade F again.]
Therefore, there is virtually infinitely more evidence for my hypothesis than K’s. Therefore, it is clearly the best current general explanation for the OOL.
[v --> Willfully false conclusion. Overall grade, F, for cause.]
If those who so proudly boasted of having cornered the market on science and rationality have to resort to such tactics once confronted with the key issues, what does that tell us about the underlying merit of their case? Volumes, and none of it good. The above commented excerpt also shows the pattern of insistent repetition of falsified claims, and the strawman caricature game, that serve as enablers of the utterly uncivil. It will be no surprise at all to see the DW fallacies touted elsewhere as proofs positive of the overthrow of design thinkers, and of their dishonesty or stupidity or insanity etc. That is why the sort of drumbeat repetition of the willfully false on this matter by evo mat advocates is not at all innocent or excusable. KF

kairosfocus
August 30, 2012 at 04:34 AM PDT
OK Allan, just because there is a link-up between the tRNA and the codon (in the ribosome) does NOT mean there is a physio-chemical connection between the codon and the amino acid. As UD said, the relationship is arbitrary, which means there isn't any law that determines it- the choice of amino acid to specific codon. _______ Joe, AM needs to observe the discussion and diagram here at Wiki, and to understand that the CCA-coupler that carries the AA is a standard bloc. It is the loading enzyme that "recognises" a given tRNA and loads it with the correct AA. Indeed, this has been used to load tRNA's with artificial AA's, and used onwards to create new, unnatural proteins. In short, the connexion USES the standard chemistry, but the AA for a given tRNA with a given anticodon is assigned on an INFORMATIONAL basis, not a chemical basis. The transcription-translation protein synthesis process USES chemistry but is not driven by blindly forced chemical interactions. Instead, the crucial step is plainly informational, i.e. a given codon tells the tRNA entering the Ribosome and coupling to it to couple the loaded AA as the next in the chain. Then the Ribosome ratchets the mRNA tape forward one slot, and the next tRNA is used to add, until the stop codon comes along. This is an informationally controlled, algorithmic, coded, step by step process. KF

Joe
August 30, 2012 at 04:20 AM PDT
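The claim in the comment above, that the codon-to-amino-acid assignment behaves informationally, like a lookup table, rather than as a chemical necessity, can be sketched in code. The three-codon subset of the standard genetic code below is real; the reassigned "Xaa" entry is a hypothetical placeholder standing in for the unnatural amino acids loaded in the amber-suppression style experiments mentioned:

```python
# A few entries of the standard genetic code (real assignments):
standard_code = {
    "AUG": "Met",   # start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UAG": "STOP",  # amber stop codon
}

def translate(mrna: str, code: dict) -> list:
    """Read an mRNA string three bases at a time, looking up each codon."""
    out = []
    for i in range(0, len(mrna) - 2, 3):
        aa = code[mrna[i:i + 3]]
        if aa == "STOP":
            break
        out.append(aa)
    return out

print(translate("AUGUUUGGCUAG", standard_code))    # ['Met', 'Phe', 'Gly']

# Reassigning one codon changes only the table, not the chemistry:
amber_suppressed = dict(standard_code, UAG="Xaa")  # "Xaa": hypothetical AA
print(translate("AUGUUUGGCUAG", amber_suppressed)) # ['Met', 'Phe', 'Gly', 'Xaa']
```

The same transcript yields a different product under the edited table, which is the sense in which the correspondence is a convention implemented in the machinery rather than a law-like chemical consequence.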
I see that Allan Miller has gone off the edge- earth to Allan- there isn't any physio-chemical connection between the nucleotide (codon) and the amino acid it represents- the codon does not become the amino acid via some chemical reaction. Yes, there are chemical connections/bonds between the nucleotides. Yes, there are chemical connections/bonds between the tRNA and its amino acid. Yes, there are chemical connections/bonds between the amino acids in the polypeptide. And all of that is irrelevant to what I said.

Joe
August 29, 2012 at 05:00 PM PDT
Kairosfocus- Seeing that you and CR have an on-topic dialog going, could you please start an open thread which posters here can use to correct the misconceptions of our opponents, wherever they roam? _______ Actually, Joe, this is one time where the evasiveness, strawman distortions and personalities played by evolutionary materialists and fellow travellers are very much a part of the issue. In response to a syllabus of eighteen specific questions, they are repeatedly unable to answer straight or cogently. That itself reveals a lot about their once proud -- now tattered -- boasts to have cornered the market on rationality and science. On a personal note, I am glad to see the improvements in tone post the judgements delivered. KF

Joe
August 29, 2012 at 03:42 PM PDT
Joe: Not only so, but the CCA-coupler is a universal joint in the tRNA. It is the "loading enzyme" -- note the chicken-egg loop -- that determines which AA goes with which tRNA. In short, this is an INFORMATIONAL correspondence, not a chemical one. KF

kairosfocus
August 29, 2012 at 03:21 PM PDT
CR: I am very familiar with the common over-reading of the theory-ladenness of observations. The key way out of it is the multiplication of lines of investigation that are sufficiently distinct that massive coincidence is ever more unlikely. Yes, the reason why some Schoolmen refused to look through Galileo's scope is much the same; they had reasons and rhetoric to distrust it. But there was excellent reason to see that the telescope worked well. And the relevant explanatory theories are not controversial. We can for instance see light bent in a prism, and a lens is a stack of prisms. We can also see that lenses are accurate. Then there was the use of telescopes in terrestrial observations, the first commercial application. The best explanation of what we see in the lens is that it is a genuine magnification. (For instance, I well recall when, in a 4th form physics lab, we built a compound microscope and looked at a then J'can $0.50 note through it. Consistent with what we saw by eye, and giving more details.) And so forth. No real grounds for radical relativism and we-live-in-Plato's-cave games there. KF

kairosfocus
August 29, 2012, 03:16 PM PDT
Thank you, Kairosfocus -- now they have all just degenerated into a mob, babbling incoherently. And it is their own misconceptions that they have to blame for that. Life is good...

Joe
August 29, 2012, 03:01 PM PDT
Joe: The desperate strawman twists and turns at TSZ serve only to underscore just how much the bite does not measure up to the bark. What I laid out is the Darwin's pond challenge that evo mat advocates have by implication set for themselves, and the big gap they face is where functionally specific complex organisation and associated information come from once their favourite out, natural selection, is off the table at OOL -- you don't have reproduction yet, so there is no differential reproductive success. Naked lucky-noise miracles won't play in Peoria, I suppose. As for in Kingston or Bridgetown or St John's . . . KF

kairosfocus
August 29, 2012, 01:45 PM PDT
CR: Quick note. The old-fashioned view of induction as, in effect, generalisation from particular cases to a general rule is passé. An inductive argument has more recently been understood as one in which empirical evidence provides support (hopefully substantial), but not demonstrative proof, of a conclusion. This includes the sort of generalisation with which we are familiar, but it is broader, because it was realised that there is an organic family resemblance. That is why IBCE -- inference to best current explanation -- is a form of inductive reasoning. Go look up the exchanges with Maus on this, and while you are at it, go look at SEP:
An inductive logic is a system of evidential support that extends deductive logic to less-than-certain inferences. For valid deductive arguments the premises logically entail the conclusion, where the entailment means that the truth of the premises provides a guarantee of the truth of the conclusion. Similarly, in a good inductive argument the premises should provide some degree of support for the conclusion, where such support means that the truth of the premises indicates with some degree of strength that the conclusion is true. Presumably, if the logic of good inductive arguments is to be of any real value, the measure of support it articulates should meet the following condition: Criterion of Adequacy (CoA): As evidence accumulates, the degree to which the collection of true evidence statements comes to support a hypothesis, as measured by the logic, should tend to indicate that false hypotheses are probably false and that true hypotheses are probably true.
Hope that helps as a quickie. Gotta go now. KF

kairosfocus
August 29, 2012, 01:40 PM PDT
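The SEP passage's Criterion of Adequacy -- that accumulating evidence should drive support toward true hypotheses and away from false ones -- can be made concrete with a minimal Bayesian sketch. This is my own illustration, not anything argued in the thread: two rival hypotheses assign different likelihoods to an observation, and repeated independent occurrences of that observation shift the posterior toward the hypothesis that better predicts them. All priors and likelihoods below are assumed, made-up numbers.

```python
# Sketch of the Criterion of Adequacy: support for the better-predicting
# hypothesis accumulates as independent observations pile up.

def posterior_h1(prior_h1, p_o_given_h1, p_o_given_h2, n_observations):
    """Posterior P(H1 | O observed n times), for two exhaustive rivals H1, H2."""
    w1 = prior_h1 * p_o_given_h1 ** n_observations
    w2 = (1 - prior_h1) * p_o_given_h2 ** n_observations
    return w1 / (w1 + w2)

# Assumed numbers: H1 predicts O with prob 0.9, H2 with prob 0.5.
# Even from a skeptical prior of 0.1 on H1, support accumulates quickly:
for n in (0, 5, 20):
    print(n, round(posterior_h1(0.1, 0.9, 0.5, n), 4))
```

The posterior climbs from 0.1 toward 1 as n grows, which is the inductive (non-demonstrative) support relation the comment describes: the evidence never entails H1, but it comes to indicate H1 with increasing strength.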