Uncommon Descent | Serving The Intelligent Design Community

Author: kairosfocus

ID Foundations, 5: Functionally Specific, Complex Organization and associated Information as empirically testable (and recognised) signs of design

(ID Foundations series so far: 1, 2, 3, 4 )

In a current UD discussion thread, frequent commenter MarkF (who supports evolutionary materialism) has made the following general objection to the inference to design:

. . . my claim is not that ID is false. Just that [it] is not falsifiable. On the other hand claims about specific designer(s) with known powers and motives are falsifiable and, in all cases that I know of, clearly false.

The objection is, however, trivially easy to correct.

Not least, as we — including MF — are designers who routinely leave behind empirically testable, reliable signs of design, such as posts on the UD blog in English that (thanks to the infinite monkeys “theorem” as discussed in post no. 4 in this series) are well beyond the credible reach of undirected chance and necessity on the gamut of the observed cosmos. For instance, the excerpt just above uses 210 7-bit ASCII characters, which specifies a configuration space of 128^210 ~ 3.26 * 10^442 possible bit combinations. The whole observable universe, acting as a search engine working at the fastest possible physical rate [10^45 states/s, for 10^80 atoms, for 10^25 s: 10^150 possible states], could not scan as much as 1 in 10^290 of that.

That is, any conceivable chance and necessity based search on the scope of our cosmos would very comfortably round down to a practical zero. But MF, as an intelligent and designing commenter, probably tossed off the above sentences in a minute or two.
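For those who want to check the arithmetic, here is a minimal sketch (standard-library Python; the variable names are mine, for illustration) that recomputes the 128^210 configuration space and compares it with the roughly 10^150 states the observed cosmos could sample on the figures just given:

```python
from math import log10

# 210 characters drawn from a 128-symbol (7-bit ASCII) alphabet
chars = 210
alphabet = 128
config_space_log10 = chars * log10(alphabet)   # log10(128^210) ~ 442.5

# Upper bound on states the observed cosmos could sample, per the figures above:
# 10^80 atoms * 10^45 states/s * 10^25 s = 10^150 states
cosmos_states_log10 = 80 + 45 + 25

print(f"configuration space ~ 10^{config_space_log10:.1f}")   # ~ 10^442.5
print(f"cosmic search bound ~ 10^{cosmos_states_log10}")      # 10^150
print(f"searchable fraction ~ 10^{cosmos_states_log10 - config_space_log10:.1f}")
# ~ 10^-292.5, i.e. far less than 1 in 10^290 of the space
```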

That is why such functionally specific, complex organisation and associated information [FSCO/I] are credible, empirically testable and reliable signs of intelligent design.

But don’t take my word for it.

A second UD commenter, Acipenser (= s[t]urgeon), recently challenged BA 77 and this poster as follows, in the signs of scientism thread:

195: What does the Glasgow Coma scale measure? The mind or the body?

206: kairosfocus: What does the Glasgow Coma scale measure? Mind or Body?

This is a scale for measuring consciousness that, as the Wiki page notes, is “used by first aid, EMS, and doctors as being applicable to all acute medical and trauma patients.” That is, the scale tests for consciousness. And — as the verbal responsiveness test especially shows — the test is an example of how the inference to design is routinely used in an applied science context, often in literal life-or-death situations:

Fig. A: EMTs at work. Such paraprofessional medical personnel routinely test for the consciousness of patients by rating their capacities on eye, verbal and motor responsiveness, using the Glasgow Coma Scale, which is based on an inference to design as a characteristic behaviour of conscious intelligences. (Source: Wiki.)
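For readers unfamiliar with the scale, here is a minimal sketch of how a GCS total is assembled from the three responsiveness ratings mentioned in the caption; the component scores are the standard published ones, while the helper function and its names are mine, purely for illustration:

```python
# Standard Glasgow Coma Scale component scores (totals range from 3 to 15)
EYE    = {"spontaneous": 4, "to speech": 3, "to pain": 2, "none": 1}
VERBAL = {"oriented": 5, "confused": 4, "inappropriate words": 3,
          "incomprehensible sounds": 2, "none": 1}
MOTOR  = {"obeys commands": 6, "localises pain": 5, "withdraws from pain": 4,
          "abnormal flexion": 3, "extension": 2, "none": 1}

def gcs(eye: str, verbal: str, motor: str) -> int:
    """Sum the three observed responsiveness ratings into a GCS total."""
    return EYE[eye] + VERBAL[verbal] + MOTOR[motor]

# A fully alert, responsive patient scores 15; deep coma scores 3
print(gcs("spontaneous", "oriented", "obeys commands"))  # 15
print(gcs("none", "none", "none"))                       # 3
```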

In short, the Glasgow Coma Scale [GCS] is actually a case in point of the reliability and scientific credibility of the inference to design; even in life and death situations.

Why do I say that?

Read More ›

ID Foundations, 4: Specified Complexity and linked Functional Organisation as signs of design

(NB: ID Foundations Series, so far: 1, 2, 3.)

In a recent comment on the ID Foundations 3 discussion thread, occasional UD commenter LastYearOn [henceforth LYO] remarked:

Behe is implicitly assuming that natural processes cannot explain human technology. However natural processes do explain technology, by explaining humans. We may think of computers as somehow distinct from objects that formed from the direct result of natural process. And in important ways they are. But that doesn’t mean that they aren’t ultimately explainable naturally. Behe’s argument is therefore circular.

Think of it this way. In certain ways nature is prone to organization. From cells to multicellular organisms to humans. Computers are just the latest example.

In essence, LYO is arguing — yea, even, confidently assuming — that since nature has the capacity to spontaneously generate designers through evolutionary means, technology and signs of design reduce to blind forces and circumstances of chance plus necessity in action. Thus, when we behold, say, a ribosome in action —

Fig. A: The Ribosome in action in protein translation, assembling (and then completing) a protein step by step [= algorithmically] based on the sequence of three-letter codons in the mRNA tape, and using tRNAs as amino acid “taxis” and position-arm tool-tips, implementing a key part of a von Neumann-type self-replicator. (Courtesy, Wikipedia.)

___________________

. . . we should not think: digitally coded, step-by-step algorithmic process; so, on signs of design: design. Instead, LYO and other evolutionary materialists argue that we should think: here is an example of the power of undirected chance plus necessity to spontaneously create a complex functional entity that is the basis for designers as we observe them, humans.
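To make the “digitally coded, step by step algorithmic process” description concrete, here is a minimal sketch that steps along an mRNA string three letters at a time and maps each codon to an amino acid; only a handful of the 64 standard codon assignments are included, and the function is mine, purely for illustration:

```python
# A small subset of the standard genetic code (codon -> amino acid)
CODON_TABLE = {
    "AUG": "Met",  # also the usual start codon
    "UUU": "Phe", "GGC": "Gly", "GCU": "Ala", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list[str]:
    """Read the mRNA 'tape' codon by codon and assemble the peptide chain."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")
        if residue == "STOP":
            break                      # release the finished chain
        peptide.append(residue)
    return peptide

print(translate("AUGUUUGGCGCUAAAUAA"))  # ['Met', 'Phe', 'Gly', 'Ala', 'Lys']
```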

So, on the evolutionary materialistic view, the triad of explanatory causes (necessity, chance, art) collapses into the first two. Thus, signs of design such as specified complexity and associated highly specific functional organisation — including that functional organisation that happens to be irreducibly complex — reduce to being evidences of the power of chance and necessity in action!

Voila, design is finished as an explanation of origins!

But is this assumption or assertion credible?

No . . . it fallaciously begs the question of the underlying power of chance plus necessity, thus setting up the significance of the issue of specified complexity as an empirically reliable sign of design. No great surprise there. But, the issue also opens the door to a foundational understanding of the other hotly contested core ID concept, specified complexity.

Read More ›


ID Foundations, 3: Irreducible Complexity as concept, as fact, as [macro-]evolution obstacle, and as a sign of design

[ID Found’ns Series, cf. also Bartlett here]

Irreducible complexity is probably the most violently objected to foundation stone of Intelligent Design theory. So, let us first of all define it by slightly modifying Dr Michael Behe’s original statement in his 1996 Darwin’s Black Box [DBB]:

What type of biological system could not be formed by “numerous successive, slight modifications?” Well, for starters, a system that is irreducibly complex. By irreducibly complex I mean a single system composed of several well-matched interacting parts that contribute to the basic function, wherein the removal of any one of the [core] parts causes the system to effectively cease functioning. [DBB, p. 39, emphases and parenthesis added. Cf. expository remarks in comment 15 below.]

Behe proposed this definition in response to the following challenge by Darwin in Origin of Species:

If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case . . . . We should be extremely cautious in concluding that an organ could not have been formed by transitional gradations of some kind. [Origin, 6th edn, 1872, Ch VI: “Difficulties of the Theory.”]

In fact, there is a bit of question-begging by deck-stacking in Darwin’s statement: we are dealing with empirical matters, and one does not have a right to impose in effect outright logical/physical impossibility — “could not possibly have been formed” — as a criterion of test.

If one is making a positive scientific assertion that complex organs exist and were credibly formed by gradualistic, undirected change through chance mutations and differential reproductive success through natural selection and similar mechanisms, one has a duty to provide decisive positive evidence of that capacity. Behe’s onward claim is then quite relevant: for dozens of key cases, no credible macro-evolutionary pathway (especially no detailed biochemical and genetic pathway) has been empirically demonstrated and published in the relevant professional literature. That was true in 1996, and despite several attempts to dismiss key cases such as the bacterial flagellum [which is illustrated at the top of this blog page] or the relevant part of the blood clotting cascade [hint: picking, as the IC core, the part of the cascade before the “fork” — a part Behe did not address — is a strawman fallacy], it arguably remains true today.

Now, we can immediately lay the issue of the fact of irreducible complexity as a real-world phenomenon to rest.

For, a situation where core, well-matched, and co-ordinated parts of a system are each necessary for and jointly sufficient to effect the relevant function is a commonplace fact of life, one that is familiar from all manner of engineered systems; such as the classic double-acting steam engine:

Fig. A: A double-acting steam engine (Courtesy Wikipedia)

Such a steam engine is made up of rather commonly available components: cylinders, tubes, rods, pipes, crankshafts, disks, fasteners, pins, wheels, drive-belts, valves etc. But, because a core set of well-matched parts has to be carefully organised according to a complex “wiring diagram,” the specific function of the double-acting  steam engine is not explained by the mere existence of the parts.

Nor can simply choosing and re-arranging similar parts from, say, a bicycle or an old-fashioned car or the like create a viable steam engine. Specific, mutually matching parts [matched to thousandths of an inch, usually], in a very specific pattern of organisation, made of specific materials, have to be in place, and they have to be integrated into the right context [e.g. a boiler or other source providing steam at the right temperature and pressure], for it to work.

If one core part — e.g. piston, cylinder, valve, crankshaft — breaks down or is removed, core function obviously ceases.
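That part-removal test can be put in toy form: treat the engine as a set of jointly required core parts and check whether function survives when any one of them is knocked out. A minimal sketch, where the parts list and names are illustrative only and not a claim about real steam-engine engineering:

```python
# Core parts of a toy "double-acting steam engine" model
CORE_PARTS = {"boiler", "cylinder", "piston", "valve", "crankshaft", "flywheel"}

def functions(parts: set[str]) -> bool:
    """The toy system works only if every core part is present."""
    return CORE_PARTS.issubset(parts)

full_system = set(CORE_PARTS)
print(functions(full_system))                      # True: all core parts in place

# Knock out each core part in turn: function ceases in every case
for part in sorted(CORE_PARTS):
    print(part, functions(full_system - {part}))   # all False
```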

Irreducible complexity is not only a concept but a fact.

But, why is it said that irreducible complexity is a barrier to Darwinian-style [macro-]evolution and a credible sign of design in biological systems?

Read More ›

The Eng Derek Smith Cybernetic Model

ID Foundations, 2: Counterflow, open systems, FSCO/I and self-moved agents in action

In two recent UD threads, frequent commenter AI Guy, an Artificial Intelligence researcher, has thrown down the gauntlet:

Winds of Change, 76:

By “counterflow” I assume you mean contra-causal effects, and so by “agency” it appears you mean libertarian free will. That’s fine and dandy, but it is not an assertion that can be empirically tested, at least at the present time.

If you meant something else by these terms please tell me, along with some suggestion as to how we might decide if such a thing exists or not. [Emphases added]

ID Does Not Posit Supernatural Causes, 35:

Finally there is an ID proponent willing to admit that ID cannot assume libertarian free will and still claim status as an empirically-based endeavor. [Emphasis added] This is real progress!

Now for the rest of the problem: ID still claims that “intelligent agents” leave tell-tale signs (viz FSCI), even if these signs are produced by fundamentally (ontologically) the same sorts of causes at work in all phenomena . . . . since ID no longer defines “intelligent agency” as that which is fundamentally distinct from chance + necessity, how does it define it? It can’t simply use the functional definition of that which produces FSCI, because that would obviously render ID’s hypothesis (that the FSCI in living things was created by an intelligent agent) completely tautological. [Emphases original. NB: ID blogger Barry Arrington, had simply said: “I am going to make a bold assumption for the sake of argument. Let us assume for the sake of argument that intelligent agents do NOT have free will . . . ” (Emphases added.)]

This challenge brings into sharp focus the foundational issue of counter-flow, constructive work by designing, self-moved, initiating, purposing agents as a key concept and explanatory term in the theory of intelligent design. For instance, we may see from leading ID researcher William Dembski’s No Free Lunch:

. . .[From commonplace experience and observation, we may see that:]  (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.) [Emphases and explanatory parenthesis added.]

This is, of course, directly based on and aptly summarises our routine experience and observation of designers in action.

For, designers routinely purpose, plan and carry out constructive work directly or through surrogates (which may be other agents, or automated, programmed machines). Such work often produces functionally specific, complex organisation and associated information [FSCO/I; a new descriptive abbreviation that brings the organised components and the link to FSCI (as was highlighted by Wicken in 1979) into central focus].

ID thinkers argue, in turn, that FSCO/I is an empirically reliable sign pointing to intentionally and intelligently directed configuration — i.e. design — as signified cause.

And, many such thinkers further argue that:

if, P: one is not sufficiently free in thought and action to sometimes actually and truly decide by reason and responsibility (as opposed to: simply playing out the subtle programming of blind chance and necessity mediated through nature, nurture and manipulative indoctrination)

then, Q: the whole project of rational investigation of our world based on observed evidence and reason — i.e. science (including AI) — collapses in self-referential absurdity.

But, we now need to show that . . .

Read More ›

The announced “death” of the Fine-tuning Cosmological Argument seems to have been over-stated

In recent days, there has been a considerable stir in the blogosphere, as Prof. Don Page of the University of Alberta has issued two papers and a slide show that purport to show the death of — or at least significant evidence against — the fine-tuning cosmological argument. (Cf. here and here at UD. [NB: A 101-level summary and context for the fine-tuning argument, with onward links, is here. A fairly impressive compendium of articles, links and videos on fine-tuning is here. A video summary is here, from that compendium. (Privileged Planet at Amazon)])

However, an examination of the shorter of the two papers by the professor will show that he has apparently overlooked a logical subtlety. He has in fact only argued that there may be a second, fine-tuned range of possible values for the cosmological constant. This may be seen from p. 5 of that paper:

. . . with the cosmological constant being the negative of the value for the MUM that makes it have present age t_0 = H_0^-1 = 10^8 years/alpha, the total lifetime of the anti-MUM model is 2.44 t_0 = 33.4 Gyr.

Values of [L] more negative than this would presumably reduce the amount of life per baryon that has condensed into galaxies more than the increase in the fraction of baryons that condense into galaxies in the first place, so I would suspect that the value of the cosmological constant that maximizes the fraction of baryons becoming life is between zero and –[L]_0 ~ –3.5 * 10^-122, with a somewhat lower magnitude than the observed value but with the opposite sign. [Emphases added, and substitutes made for symbols that give trouble in browsers.]

Plainly, though, if one is proposing a range of values that is constrained to within several parts in 10^-122, one is discussing a fairly fine-tuned figure.

It is just that you are arguing for a second possible locus of fine-tuning, on the other side of zero.

(And that would still be so even if the new range were 0 to minus several parts in 10^-2 [a few percent], not minus several parts in 10^-122 [a few percent of a trillionth of a trillionth . . . ten times over]. Several parts in a trillion is roughly comparable to the ratio of the size of a bacterium to twice the length of Florida, or to the lengths of Cuba, Honshu in Japan, Cape York in Australia, Great Britain or Italy.)
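As a quick cross-check on the quoted figures: if the alpha in the excerpt is the fine structure constant (about 1/137), which is the reading on which the arithmetic works out, then 10^8 years/alpha is about 13.7 Gyr, and 2.44 times that is about 33.4 Gyr, as the paper states. A minimal sketch:

```python
alpha = 1 / 137.036             # fine structure constant (assumed reading of "alpha")
t0_gyr = 1e8 / alpha / 1e9      # 10^8 years / alpha, expressed in Gyr

print(round(t0_gyr, 1))         # ~13.7 Gyr, roughly the present age of the universe
print(round(2.44 * t0_gyr, 1))  # ~33.4 Gyr, the quoted anti-MUM total lifetime
```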

Read More ›

ID Foundations: The design inference, warrant and “the” scientific method

It has been said that Intelligent Design (ID) is the view that it is possible to infer from empirical evidence that “certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection.” This puts the design inference at the heart of intelligent design theory, and raises the questions of its degree of warrant and of its relationship to the — insofar as a “the” is possible — scientific method.

Leading Intelligent Design researcher William Dembski has summarised the actual process of inference:

“Whenever explaining an event, we must choose from three competing modes of explanation. These are regularity [i.e., natural law], chance, and design.” When attempting to explain something, “regularities are always the first line of defense. If we can explain by means of a regularity, chance and design are automatically precluded. Similarly, chance is always the second line of defense. If we can’t explain by means of a regularity, but we can explain by means of chance, then design is automatically precluded. There is thus an order of priority to explanation. Within this order regularity has top priority, chance second, and design last”  . . . the Explanatory Filter “formalizes what we have been doing right along when we recognize intelligent agents.” [Cf. Peter Williams’ article, The Design Inference from Specified Complexity Defended by Scholars Outside the Intelligent Design Movement, A Critical Review, here. We should in particular note his observation: “Independent agreement among a diverse range of scholars with different worldviews as to the utility of CSI adds warrant to the premise that CSI is indeed a sound criterion of design detection. And since the question of whether the design hypothesis is true is more important than the question of whether it is scientific, such warrant therefore focuses attention on the disputed question of whether sufficient empirical evidence of CSI within nature exists to justify the design hypothesis.”]

The design inference process as described can be represented in a flow chart:


Fig. A: The Explanatory Filter and the inference to design, as applied to various aspects of an object, process or phenomenon, and in the context of the generic scientific method. (So, we first envision nature acting by low contingency, law-like mechanical necessity such as with F = m*a . . . think of a heavy unsupported object near the earth’s surface falling with initial acceleration g = 9.8 N/kg or so. That is the first default. Similarly, we may see high contingency knocking out the first default — under similar starting conditions, there is a broad range of possible outcomes. If things are highly contingent in this sense, the second default is: CHANCE. That is only knocked out if an aspect of an object, situation, or process etc. exhibits, simultaneously: (i) high contingency, (ii) tight specificity of configuration relative to possible configurations of the same bits and pieces, and (iii) high complexity or information carrying capacity, usually beyond 500 – 1,000 bits. In such a case, we have good reason to infer that the aspect of the object, process, phenomenon etc. reflects design or . . . following the terms used by Plato 2,350 years ago in The Laws, Bk X . . . the ART-ificial, or contrivance, rather than nature acting freely through undirected blind chance and/or mechanical necessity. [NB: This trichotomy across necessity and/or chance and/or the ART-ificial is so well established empirically that it needs little defense. Those who wish to suggest, “no, we don’t know; there may be a fourth possibility,” are the ones who first need to show us such before they are to be taken seriously. Where, too, it is obvious that the distinction between “nature” (= “chance and/or necessity”) and the ART-ificial is a reasonable and empirically grounded distinction; just look at a list of ingredients and nutrients on a food package label. The loaded rhetorical tactic of suggesting, implying or accusing that design theory really only puts up a religiously motivated way to inject the supernatural as the real alternative to the natural, fails. (Cf. the UD correctives 16 – 20 here, as well as 1 – 8 here.) And no, when, say, the averaging out of random molecular collisions with a wall gives rise to a steady average pressure, that is a case of empirically reliable, law-like regularity emerging from a strong characteristic of such a process when sufficient numbers are involved, due to the statistics of very large numbers . . . it is easy to have 10^20 molecules or more at work, so there is a relatively low fluctuation, unlike what we see with particles undergoing Brownian motion. That is, in effect, low contingency mechanical necessity in the sense we are interested in, in action. So, for instance, we may derive for ideal gas particles the relationship P*V = n*R*T as a reliable law.] )
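Put as a toy decision procedure, the filter’s ordering runs: low contingency defaults to law, high contingency defaults to chance, and design is inferred only when high contingency, tight specification and complexity beyond the 500 – 1,000 bit band occur together. A minimal sketch of that ordering, where the function and parameter names are mine, for illustration only:

```python
def explanatory_filter(high_contingency: bool,
                       specified: bool,
                       info_bits: float,
                       threshold_bits: int = 500) -> str:
    """Toy version of the law -> chance -> design ordering described above."""
    if not high_contingency:
        return "mechanical necessity (law)"      # first default
    if specified and info_bits >= threshold_bits:
        return "design (intelligently directed configuration)"
    return "chance"                              # second default

print(explanatory_filter(False, False, 0))    # a law-like regularity
print(explanatory_filter(True, False, 2000))  # complex but unspecified: chance
print(explanatory_filter(True, True, 1470))   # e.g. the 210-character ASCII post discussed above
```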

Explaining (and discussing) in steps:

1 –> As was noted in background remarks 1 and 2, we commonly observe signs and symbols, and infer on best explanation to underlying causes or meanings. In some cases, we assign causes to (a) natural regularities tracing to mechanical necessity [i.e. “law of nature”], in others to (b) chance, and in yet others we routinely assign cause to (c) intentionally and intelligently, purposefully directed configuration, or design.  Or, in leading ID researcher William Dembski’s words, (c) may be further defined in a way that shows what intentional and intelligent, purposeful agents do, and why it results in functional, specified complex organisation and associated information:

. . . (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.)

Read More ›


Background Note: On Orderly, Random and Functional Sequence Complexity

In 2005, David L. Abel and Jack T. Trevors published a key article on order, randomness and functionality that sets a further context for appreciating the warrant for the design inference. The publication data and title for the peer-reviewed article are as follows: “Three subsets of sequence complexity and their relevance to biopolymeric information,” Theor Biol Med Model 2005; 2: 29 (published online 2005 August 11; doi: 10.1186/1742-4682-2-29; PMCID: PMC1208958; copyright © 2005 Abel and Trevors; licensee BioMed Central Ltd). A key figure in the article (NB: in the public domain) was their Fig. 4: Superimposition of Functional Sequence Complexity onto Figure 2. The Y1 axis plane plots the decreasing degree of algorithmic compressibility as complexity increases from Read More ›
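A quick way to see the compressibility contrast that the Y1 axis tracks (orderly sequences compress drastically, random sequences hardly at all) is a small standard-library experiment; this sketch illustrates only the orderly-versus-random axis, not the functionality dimension Abel and Trevors add, and the sequences are my own toy examples:

```python
import random
import string
import zlib

def compression_ratio(s: str) -> float:
    """Compressed size divided by raw size: small = highly compressible (orderly)."""
    raw = s.encode()
    return len(zlib.compress(raw, 9)) / len(raw)

orderly = "AB" * 500                                                # repetitive, orderly sequence
random_seq = "".join(random.choice(string.ascii_uppercase) for _ in range(1000))  # random sequence

print(round(compression_ratio(orderly), 2))     # tiny, e.g. ~0.02: highly compressible
print(round(compression_ratio(random_seq), 2))  # far larger: little structure to exploit
```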

Background Note: On signs, symbols and their significance

As a preliminary step to a discussion [DV, to follow] of the significance of and warrant for the design inference, let us now symbolise how we interact with and draw inferences about signs and symbols (generally following Peirce et al [Added, Feb 28: including P’s thought on warrant by inference to best explanation i.e. abductive reasoning; where also warrant can be understood on Toulmin, Plantinga, Gettier and others (cf broader discussion here )]): __________________ Signs: I observe one or more signs [in a pattern], and infer the signified object, on a warrant: I: [si] –> O, on W a –> Here, as I will use “sign” [as opposed to “symbol”],  the connexion is a more or less causal or natural Read More ›

They said it: NCSE endorses the “design is re-labelled creationism” slander

In the short term, a smear campaign can be very successful, and will poison the atmosphere, perhaps even poisoning the general public’s perception of your opponents. Usually, it works by using what may be called for convenience the trifecta fallacy, unfortunately — and as we shall shortly see — a now habitual pattern of all too many evolutionary materialism advocates when they deal with Intelligent Design. Specifically:

i: use a smelly red herring distractor to pull attention away from the real issues and arguments

ii: lead it away to a strawman caricature of the issues and arguments of the opponent

iii: soak it in inflammatory innuendos, guilt by invidious association or outright demonising attacks to the man (ad hominems) and ignite through snide or incendiary rhetoric.

The typical result of such an uncivil, disrespectful rhetorical tactic when used on a naive or trusting public is that it distracts attention, clouds, confuses, polarises and poisons the atmosphere for discussion. Especially when false accusations are used, it can seriously damage reputations and careers. So, the trifecta is at minimum a violation of duties of care and respect. At worst, it is a cynically calculated propagandistic deception that through clouding the atmosphere with a poisonous, polarising cloud, divides the public and points their attention to an imaginary threat elsewhere, so that an agenda that plainly cannot stand on its own merits can gain power in the community.

But what happens when the smear begins to unravel as more and more people begin to understand that you have failed to be fair or truthful, in the face of abundant evidence and opportunity to the contrary?

Let us see, by examining the NCSE-hosted (thus, again, endorsed) page for the ironically named New Mexico Coalition for Excellence in Science and Math Education. Excerpting:

Science deals with natural explanations for natural phenomena. Creationism or intelligent design, if allowed, would change this to promote supernatural explanations for natural phenomena — a contradiction in terms with regard to science. Intelligent design is also sterile as far as science is concerned. To be considered as real science, it must be able to explain and predict natural phenomena. Intelligent design proponents simply say that life is too complex to have arisen naturally. Therefore, an intelligent being (God) must have directly intervened whenever it chose to cause the diversity of the species. This explains everything and it explains nothing; it is not science.

The creationist groups attempt to masquerade their ideas as science simply by calling the concept “intelligent design theory”. No testable hypotheses or any form of scientific research has been presented to support their attempts to insert religion into science. Furthermore, it is suspected that the aim of these religiously motivated people is to redefine the meaning of science; if they were successful, science would become useless as a method for learning about the natural world. CESE decries the very usage of science terminology where there is no sound use of science. CESE also decries any political attempt to discredit the Theory of Evolution. Creationists present false statements concerning the validity of observed evidence for evolution such as: “there is no fossil evidence for evolution,” “it is impossible to obtain higher complexity systems from lower complexity systems,” etc. They call into question the motives and beliefs of scientists with claims such as, “if you believe in evolution, you are an atheist,” etc. They have even invented an imaginary scientific “controversy” to argue their agenda . . .

This needs to be exposed and corrected in steps, and it is worth the while to immediately pause and look at the Dissent from Darwin list to see that: yes, Virginia, there is a real controversy on scientific matters tied to Darwinism.  Also, let us list links to the series so far: background, and “They said it . . . ” 1, 2, 3.

So now, correcting in steps: Read More ›

They said it: NCSE endorses the teaching of evolution as “fact”

We may add the NCSE-endorsed declaration of the North Carolina Math and Science Education Network to the list of declarations that “evolution” is to be taught as “fact.” (I freely say, endorsed, as NCSE hosts the declaration, and does so without disclaimer.) Let us excerpt: The primary goal of science teaching is to produce a scientifically competent citizenry, one which knows how to distinguish between theories substantiated with sound evidence and theories which cannot be substantiated through evidence. Evolution is identified as being the central unifying role in the biological sciences. If we teach our students that the theory of evolution is not accepted fact, we also put into question scientific advancement in chemistry, physics, astronomy, and all other related Read More ›

They said it: “Evolution is a Fact!”

The opening of  the current version of the Wikipedia article, “Evolution as theory and fact,” (with links and references removed) reads: The statement “evolution is both a theory and a fact” is often seen in biological literature. Evolution is a “theory” in the scientific sense of the term “theory”; it is an established scientific model that explains observations and makes predictions through mechanisms such as natural selection. When scientists say “evolution is a fact”, they are using one of two meanings of the word “fact”. One meaning is empirical: evolution can be observed through changes in allele frequencies or traits of a population over successive generations. Another way “fact” is used is to refer to a certain kind of theory, Read More ›

They said it: NSTA’s radical redefinition of Science

We have all heard of the NCSE, but the National Science Teachers Association [of the US], NSTA, has proposed a new definition of the nature of science, in a declaration signed off by its Board of Directors, as long ago as July, 2000.  Excerpting: All those involved with science teaching and learning should have a common, accurate view of the nature of science. Science is characterized by the systematic gathering of information through various forms of direct and indirect observations and the testing of this information by methods including, but not limited to, experimentation. The principal product of science is knowledge in the form of naturalistic concepts and the laws and theories related to those concepts . . . . Read More ›

Naturalism is a priori evolutionary materialism, so it both begs the question and self-refutes

The thesis expressed in the title of this “opening bat” post is plainly controversial, and doubtless will be hotly contested and/or pointedly ignored. However, when all is said and done, it will be quite evident that it has the merit that it just happens to be both true and well-warranted. So, let us begin. Noted Harvard evolutionary biologist Richard Lewontin inadvertently lets the cat out of the bag in his well-known January 1997 New York Review of Books article, “Billions and Billions of Demons”: . . . to put a correct view of the universe into people’s heads we must first get an incorrect view out . . .   the problem is to get them to reject irrational and supernatural Read More ›