Uncommon Descent Serving The Intelligent Design Community

ID Foundations, 17a: Footnotes on Conservation of Information, search across a space of possibilities, Active Information, Universal Plausibility/ Probability Bounds, guided search, drifting/ growing target zones/ islands of function, Kolmogorov complexity, etc.


(previous, here)

There has been a recent flurry of web commentary on design theory concepts linked to the concept of functionally specific, complex organisation and/or associated information (FSCO/I) introduced across the 1970s and into the 1980s by Orgel, Wicken et al. (As is documented here.)

This flurry seems to be connected to the announcement of an upcoming book by Meyer — it looks like attempts are being made to dismiss it before it comes out, through what has recently been tagged “noviews”: criticising, usually harshly, what one has not read, as a substitute for a genuine book review.

It will help to focus for a moment on the just linked ENV article, in which ID thinker William Dembski responds to such critics, in part:

[L]et me respond, making clear why criticisms by Felsenstein, Shallit, et al. don’t hold water.

There are two ways to see this. One would be for me to review my work on complex specified information (CSI), show why the concept is in fact coherent despite the criticisms by Felsenstein and others, indicate how this concept has since been strengthened by being formulated as a precise information measure, argue yet again why it is a reliable indicator of intelligence, show why natural selection faces certain probabilistic hurdles that impose serious limits on its creative potential for actual biological systems (e.g., protein folds, as in the research of Douglas Axe [Link added]), justify the probability bounds and the Fisherian model of statistical rationality that I use for design inferences, show how CSI as a criterion for detecting design is conceptually equivalent to information in the dual senses of Shannon and Kolmogorov, and finally characterize conservation of information within a standard information-theoretic framework. Much of this I have done in a paper titled “Specification: The Pattern That Signifies Intelligence” (2005) [link added] and in the final chapters of The Design of Life (2008).

But let’s leave aside this direct response to Felsenstein (to which neither he nor Shallit ever replied). The fact is that conservation of information has since been reconceptualized and significantly expanded in its scope and power through my subsequent joint work with Baylor engineer Robert Marks. Conservation of information, in the form that Felsenstein is still dealing with, is taken from my 2002 book No Free Lunch . . . .

[W]hat is the difference between the earlier work on conservation of information and the later? The earlier work on conservation of information focused on particular events that matched particular patterns (specifications) and that could be assigned probabilities below certain cutoffs. Conservation of information in this sense was logically equivalent to the design detection apparatus that I had first laid out in my book The Design Inference (Cambridge, 1998).

In the newer approach to conservation of information, the focus is not on drawing design inferences but on understanding search in general and how information facilitates successful search. The focus is therefore not so much on individual probabilities as on probability distributions and how they change as searches incorporate information. My universal probability bound of 1 in 10^150 (a perennial sticking point for Shallit and Felsenstein) therefore becomes irrelevant in the new form of conservation of information whereas in the earlier it was essential because there a certain probability threshold had to be attained before conservation of information could be said to apply. The new form is more powerful and conceptually elegant. Rather than lead to a design inference, it shows that accounting for the information required for successful search leads to a regress that only intensifies as one backtracks. It therefore suggests an ultimate source of information, which it can reasonably be argued is a designer. I explain all this in a nontechnical way in an article I posted at ENV a few months back titled “Conservation of Information Made Simple” (go here).

A lot of this pivots on conservation of information and the idea of search in a space of possibilities, so let us also excerpt the second ENV article as well:

Conservation of information is a term with a short history. Biologist Peter Medawar used it in the 1980s to refer to mathematical and computational systems that are limited to producing logical consequences from a given set of axioms or starting points, and thus can create no novel information (everything in the consequences is already implicit in the starting points). His use of the term is the first that I know, though the idea he captured with it is much older. Note that he called it the “Law of Conservation of Information” (see his The Limits of Science, 1984).

Computer scientist Tom English, in a 1996 paper, also used the term conservation of information, though synonymously with the then recently proved results by Wolpert and Macready about No Free Lunch (NFL). In English’s version of NFL, “the information an optimizer gains about unobserved values is ultimately due to its prior information of value distributions.” As with Medawar’s form of conservation of information, information for English is not created from scratch but rather redistributed from existing sources.

Conservation of information, as the idea is being developed and gaining currency in the intelligent design community, is principally the work of Bob Marks and myself, along with several of Bob’s students at Baylor (see the publications page at www.evoinfo.org). Conservation of information, as we use the term, applies to search. Now search may seem like a fairly restricted topic. Unlike conservation of energy, which applies at all scales and dimensions of the universe, conservation of information, in focusing on search, may seem to have only limited physical significance. But in fact, conservation of information is deeply embedded in the fabric of nature, and the term does not misrepresent its own importance . . . .

Humans search for keys, and humans search for uncharted lands. But, as it turns out, nature is also quite capable of search. Go to Google and search on the term “evolutionary search,” and you’ll get quite a few hits. Evolution, according to some theoretical biologists, such as Stuart Kauffman, may properly be conceived as a search (see his book Investigations). Kauffman is not an ID guy, so there’s no human or human-like intelligence behind evolutionary search as far as he’s concerned. Nonetheless, for Kauffman, nature, in powering the evolutionary process, is engaged in a search through biological configuration space, searching for and finding ever-increasing orders of biological complexity and diversity . . . .

Evolutionary search is not confined to biology but also takes place inside computers. The field of evolutionary computing (which includes genetic algorithms) falls broadly under that area of mathematics known as operations research, whose principal focus is mathematical optimization. Mathematical optimization is about finding solutions to problems where the solutions admit varying and measurable degrees of goodness (optimality). Evolutionary computing fits this mold, seeking items in a search space that achieve a certain level of fitness. These are the optimal solutions. (By the way, the irony of doing a Google “search” on the target phrase “evolutionary search,” described in the previous paragraph, did not escape me. Google’s entire business is predicated on performing optimal searches, where optimality is gauged in terms of the link structure of the web. We live in an age of search!)

If the possibilities connected with search now seem greater to you than they have in the past, extending beyond humans to computers and biology in general, they may still seem limited in that physics appears to know nothing of search. But is this true? The physical world is life-permitting — its structure and laws allow (though they are far from necessitating) the existence of not just cellular life but also intelligent multicellular life. For the physical world to be life-permitting in this way, its laws and fundamental constants need to be configured in very precise ways. Moreover, it seems far from mandatory that those laws and constants had to take the precise form that they do. The universe itself, therefore, can be viewed as the solution to the problem of making life possible. But problem solving itself is a form of search, namely, finding the solution (among a range of candidates) to the problem . . . .

The fine-tuning of nature’s laws and constants that permits life to exist at all is not like this. It is a remarkable pattern and may properly be regarded as the solution to a search problem as well as a fundamental feature of nature, or what philosophers would call a natural kind, and not merely a human construct. Whether an intelligence is responsible for the success of this search is a separate question. The standard materialist line in response to such cosmological fine-tuning is to invoke multiple universes and view the success of this search as a selection effect: most searches ended without a life-permitting universe, but we happened to get lucky and live in a universe hospitable to life.

In any case, it’s possible to characterize search in a way that leaves the role of teleology and intelligence open without either presupposing them or deciding against them in advance. Mathematically speaking, search always occurs against a backdrop of possibilities (the search space), with the search being for a subset within this backdrop of possibilities (known as the target). Success and failure of search are then characterized in terms of a probability distribution over this backdrop of possibilities, the probability of success increasing to the degree that the probability of locating the target increases . . . .

[T]he important issue, from a scientific vantage, is not how the search ended but the probability distribution under which the search was conducted.
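The excerpt's point, that what matters scientifically is the probability distribution under which the search is conducted, can be made concrete with a toy sketch (the space, target and samplers below are invented purely for illustration):

```python
import random

random.seed(1)

# Toy search space W: the integers 0..999; the "target" T is a small subset.
W = range(1000)
T = set(range(10))                      # |T| / |W| = 1/100

def success_rate(sampler, trials=100_000):
    """Estimate the probability that a single query drawn from `sampler` hits T."""
    hits = sum(1 for _ in range(trials) if sampler() in T)
    return hits / trials

uniform = lambda: random.randrange(1000)   # blind search: success chance |T|/|W|
biased  = lambda: random.randrange(100)    # a distribution concentrated near T

print(success_rate(uniform))   # ~0.01
print(success_rate(biased))    # ~0.10
```

Same space, same target; only the distribution over the space changed, and with it the probability of success.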

So, we see the issue of search in a space of possibilities can be pivotal for looking at a fairly broad range of subjects, bridging from the world of Easter egg hunts, to that of computing, to the world of life forms, and onwards to the evident fine tuning of the observed cosmos and its potential invitation of a cosmological design inference.

That’s a pretty wide swath of issues.

However, the pivot of current debates is the design theory controversy linked to the world of life. Accordingly Dembski focuses there, and it is worth pausing for a further clip so that we can see his logic (and not the irresponsible caricatures of it that are too often used to swarm down what he has had to say):

[I]nformation is usually characterized as the negative logarithm to the base two of a probability (or some logarithmic average of probabilities, often referred to as entropy). This has the effect of transforming probabilities into bits and of allowing them to be added (like money) rather than multiplied (like probabilities). Thus, a probability of one-eighth, which corresponds to tossing three heads in a row with a fair coin, corresponds to three bits, which is the negative logarithm to the base two of one-eighth.

Such a logarithmic transformation of probabilities is useful in communication theory, where what gets moved across communication channels is bits rather than probabilities and the drain on bandwidth is determined additively in terms of number of bits. Yet, for the purposes of this “Made Simple” paper, we can characterize information, as it relates to search, solely in terms of probabilities, also cashing out conservation of information purely probabilistically.

Probabilities, treated as information used to facilitate search, can be thought of in financial terms as a cost — an information cost. Think of it this way. Suppose there’s some event you want to have happen. If it’s certain to happen (i.e., has probability 1), then you own that event — it costs you nothing to make it happen. But suppose instead its probability of occurring is less than 1, let’s say some probability p. This probability then measures a cost to you of making the event happen. The more improbable the event (i.e., the smaller p), the greater the cost. Sometimes you can’t increase the probability of making the event occur all the way to 1, which would make it certain. Instead, you may have to settle for increasing the probability to q where q is less than 1 but greater than p. That increase, however, must also be paid for . . . . [However,] just as increasing your chances of winning a lottery by buying more tickets offers no real gain (it is not a long-term strategy for increasing the money in your pocket), so conservation of information says that increasing the probability of successful search requires additional informational resources that, once the cost of locating them is factored in, do nothing to make the original search easier . . . .

Conservation of information says that . . .  when we try to increase the probability of success of a search . . .   instead of becoming easier, [the search] remains as difficult as before or may even . . . become more difficult once additional underlying information costs, associated with improving the search and [which are] often hidden . . .  are factored in . . . .

The reason it’s called “conservation” of information is that the best we can do is break even, rendering the search no more difficult than before. In that case, information is actually conserved. Yet often, as in this example, we may actually do worse by trying to improve the probability of a successful search. Thus, we may introduce an alternative search that seems to improve on the original search but that, once the costs of obtaining this search are themselves factored in, in fact exacerbates the original search problem.
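The bits-from-probabilities transformation, and Dembski and Marks' active information measure for an improved search, can be checked in a few lines of Python (the function names and the example probabilities are ours, chosen for illustration):

```python
import math

def bits(p):
    """Information, in bits, of an event with probability p: -log2(p)."""
    return -math.log2(p)

# Three heads in a row with a fair coin: p = 1/8 -> 3 bits.
print(bits(1 / 8))                          # 3.0

def active_information(p, q):
    """Active information, log2(q/p): the bits 'bought' by raising a
    search's success probability from baseline p to improved q."""
    return math.log2(q / p)

# Improving a 1-in-1024 blind search to a 1-in-2 guided one corresponds to
# log2(512) = 9 bits of guiding information, which must be paid for somewhere.
print(active_information(1 / 1024, 1 / 2))  # 9.0
```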

So, where does all of this leave us?

A useful way forward is an imaginary exchange based on many real exchanges of comments in and around UD, here by clipping a recent addition to the IOSE Intro-Summary (which is also structured to capture an unfortunate attitude that is too common in exchanges on this subject):

__________

>>Q1: How then do search algorithms — such as genetic ones — so often succeed?

A1: Generally, by intelligently directed injection of active information. That is, information that enables searching guided by an understanding of the search space or the general or specific location of a target. (Also, cf. here. A so-called fitness function which more or less smoothly and reliably points uphill to superior performance, mapped onto a configuration space, implies just such guiding information and allows warmer/colder signals to guide hill-climbing. This, or the equivalent, appears in many guises in the field of so-called evolutionary computing. As a rule of thumb, if you see a “blind” search that seemingly delivers an informational free lunch, look for an inadvertent or overlooked injection of active information. [[Cf. here, here and here.]) In a simple example, the children’s party game, “treasure hunt,” would be next to impossible without guidance: warmer/colder . . . hot . . . red hot. (Something that gives some sort of warmer/colder message on receiving a query is an oracle.) The effect of such sets of successive warmer/colder oracular messages or similar devices is to dramatically reduce the scope of search in a space of possibilities. Intelligently guided, constrained search, in short, can be quite effective. But this is designed, insight-guided search, not blind search. From such, we can actually quantify the amount of active information injected, by comparing the reduction in degree of difficulty relative to a truly blind random search as a yardstick. And, we will see the remaining importance of the universal or solar system level probability or plausibility bound [[cf. Dembski and Abel, also discussion at ENV], which in this course will for practical purposes be 500 – 1,000 bits of information — as we saw above; i.e. these give us thresholds where the search is hard enough that design is a more reasonable approach or explanation. Of course, we need not do so explicitly; we may just look at the amount of active information involved.
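To make the warmer/colder oracle concrete, here is a minimal sketch (the space size and the tidy halving oracle are invented for illustration; real fitness functions are rarely this well behaved):

```python
import random

random.seed(0)
N = 2**20                      # a search space of about a million configurations
target = random.randrange(N)

def blind_queries():
    """Blind random search: sample uniformly until the target is hit.
    Not run here; on average it needs on the order of N queries."""
    q = 0
    while True:
        q += 1
        if random.randrange(N) == target:
            return q

def oracle_queries():
    """Warmer/colder oracle: each query learns which half holds the target,
    i.e. a binary search. The oracle injects roughly one bit of active
    information per query."""
    lo, hi, q = 0, N - 1, 0
    while lo < hi:
        q += 1
        mid = (lo + hi) // 2
        if target <= mid:
            hi = mid
        else:
            lo = mid + 1
    return q

print(oracle_queries())        # 20 queries, i.e. log2(N)
```

Twenty oracle-guided queries versus an expected million blind ones: the difference, about log2(N) = 20 bits, is a measure of the guiding information the oracle supplied.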

Q2: But, once we have a fitness function, all that is needed is to start anywhere and then proceed up the slope of the hill to a peak, no need to consider all of those outlying possibilities all over the place. So, you are making a mountain out of a mole-hill: why all the fuss and feathers over “active information,” “oracles” and “guided, constrained search”?

A2: Fitness functions, of course, are a means of guided search, by providing an oracle that points — generally — uphill. In addition, they are exactly an example of constrained search: there is function present everywhere in the zone of interest, and it follows a generally well-behaved uphill-pointing pattern. In short, from the start you are constraining the search to an island of function, T, in which neighbouring or nearby locations: Ei, Ej, Ek, etc . . . — which can be chosen by tossing out a ring of “nearby” random tries — are apt to go uphill, or to get you to another local slope pointing uphill. Also, if you are on the shoreline of function, tosses that have no function will eliminate themselves by being obviously downhill; which means it is going to be hard to island-hop from one fairly isolated zone of function to the next. In short, a theory that may explain micro-evolutionary change within an island or cluster of nearby islands is not simply to be extrapolated to one that needs to account for major differences that have to bridge large differences in configuration and function. This is not going to be materially different if the islands of function and their slopes and peaks of function grow or shrink a bit across time, or even move bodily as glorified sand-pile barrier islands are wont to, so long as such island-of-function drifting is gradual. Catastrophic disappearance of such islands, of course, would reflect something like a mass extinction event due to an asteroid impact or the like. Mass extinctions simply do not create new functional body plans; they sweep the life forms exhibiting existing body plans away, wiping the table almost wholly clean, if we are to believe the reports. Where also, the observable islands-of-function effect starts at the level of the many isolated protein families, which are estimated to be as rare as 1 in 10^64 to 1 in 10^77 or so of the space of amino acid sequences.
As ID researcher Douglas Axe noted in a 2004 technical paper: “one in 10^64 signature-consistent sequences forms a working domain . . . the overall prevalence of sequences performing a specific function by any domain-sized fold may be as low as 1 in 10^77, adding to the body of evidence that functional folds require highly extraordinary sequences.” So, what has to be reckoned with is that, in general, for a sufficiently complex situation to be relevant to FSCO/I [[500 – 1,000 or more structured yes/no questions to specify configurations, En . . . ], the configuration space of possibilities, W, is as a rule dominated by seas of non-functional gibberish configurations, so that the envisioned easy climb up Mt Improbable is dominated by the prior problem of finding a shoreline of Island Improbable.
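A toy model may help fix ideas (the one-dimensional landscape below, with two narrow islands in a wide non-functional sea, is invented for illustration and makes no claim about real protein spaces):

```python
import random

random.seed(2)

# Toy 1-D configuration space 0..99_999. Fitness is positive only near two
# widely separated peaks; everywhere else is a non-functional "sea".
PEAKS = [10_000, 90_000]
RADIUS = 50

def fitness(x):
    for p in PEAKS:
        if abs(x - p) <= RADIUS:
            return RADIUS - abs(x - p)    # smooth uphill slope on the island
    return 0                              # sea of non-function

def hill_climb(x, steps=500):
    """Local search: take a random unit step, keep it if fitness doesn't drop."""
    for _ in range(steps):
        y = x + random.choice([-1, 1])
        if fitness(y) >= fitness(x):
            x = y
    return x

# Starting on an island's slope, hill-climbing reliably finds that island's peak...
print(fitness(hill_climb(10_040)))        # reaches the peak value, 50

# ...but blind jumps from the sea almost never land on an island at all:
hits = sum(fitness(random.randrange(100_000)) > 0 for _ in range(10_000))
print(hits / 10_000)   # in expectation 198/100_000, i.e. about 0.002
```

The fitness function is only informative once you are already on an island; out in the sea it returns 0 everywhere and offers no guidance, which is the point being argued above.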

Q3: Nonsense! The Tree of Life diagram we all saw in our Biology classes proves that there is a smooth path from the last universal common ancestor [LUCA] to the different body plans and forms, from microbes to Mozart. Where did you get such nonsense from?

A3: Indeed, the tree of life was the only diagram in Darwin’s Origin of Species. However, it should be noted that it was a speculative diagram, not one based on a well-documented, observed pattern of gradual, incremental improvements. He hoped that in future decades, investigations of fossils around the world would flesh it out, and that is indeed the impression given in too many Biology textbooks and popular headlines about found “missing links.” But, in fact, the typical tree of life imagery:

Fig. G.11c, anticipated: A typical, popular level tree of life model/illustration. (Source.)

. . . is too often presented in a misleading way. First, notice the skipping over of the basic problem that without a root, neither trunks nor branches and twigs are possible. And, getting to a first, self-replicating unicellular life form — the first universal common ancestor, FUCA — that uses proteins, DNA, etc. through the undirected physics and chemistry of Darwin’s warm little electrified pond full of a prebiotic soup or the like, continues to be a major and unsolved problem for evolutionary materialist theorising. Similarly, once we reckon with claims about “convergent evolution” of eyes, flight, whale/bat echolocation “sonar” systems, etc., we begin to see that “everything branches, save when it doesn’t.” Indeed, we have to reckon with a case where, on examining the genome of a kangaroo (the tammar wallaby), it was discovered that “In fact there are great chunks of the [[human] genome sitting right there in the kangaroo genome.” The kangaroos are marsupials, not placental mammals, and the fork between the two is held to be 150 million years old. So, Carl Wieland of Creation Ministries International was fully within his rights to say: “unlike chimps, kangaroos are not supposed to be our ‘close relatives’ . . . . Evolutionists have long proclaimed that apes and people share a high percentage of DNA. Hence their surprise at these findings that ‘Skippy’ has a genetic makeup similar to ours.” Next, so soon as one looks at molecular similarities — technically, homologies (and yes, this is an argument from similarity, i.e. analogy in the end) — instead of those of gross anatomy, we run into many, mutually conflicting “trees.” Being allegedly 95 – 98+% chimp in genetics is one thing; being, what, ~ 80% kangaroo or ~ 50% banana or the like, is quite another. That is, we need to look seriously at the obvious alternative from the world of software design: code reuse and adaptation from a software library for the genome.
Worse, in fact the consistent record from the field (which is now “almost unmanageably rich,” with over 250,000 fossil species, millions of specimens in museums and billions in the known fossil beds) is that we do NOT observe any dominant pattern of origin of body plans by smooth incremental variations of successive fossils. Instead, as Stephen Jay Gould famously observed, there are systematic gaps, right from the major categories on down. Indeed, if one looks carefully at the tree illustration above, one will see where the example life forms are: on twigs at the end of branches, not the trunk or where the main branches start. No prizes for guessing why. That is why we should carefully note the following remark made in 2007 by W. Ford Doolittle and Eric Bapteste:

Darwin claimed that a unique inclusively hierarchical pattern of relationships between all organisms based on their similarities and differences [the Tree of Life (TOL)] was a fact of nature, for which evolution, and in particular a branching process of descent with modification, was the explanation. However, there is no independent evidence that the natural order is an inclusive hierarchy, and incorporation of prokaryotes into the TOL is especially problematic. The only data sets from which we might construct a universal hierarchy including prokaryotes, the sequences of genes, often disagree and can seldom be proven to agree. Hierarchical structure can always be imposed on or extracted from such data sets by algorithms designed to do so, but at its base the universal TOL rests on an unproven assumption about pattern that, given what we know about process, is unlikely to be broadly true. This is not to say that similarities and differences between organisms are not to be accounted for by evolutionary mechanisms, but descent with modification is only one of these mechanisms, and a single tree-like pattern is not the necessary (or expected) result of their collective operation . . . [[Abstract, “Pattern pluralism and the Tree of Life hypothesis,” PNAS February 13, 2007 vol. 104 no. 7 2043-2049.]

Q4: But, the evidence shows that natural selection is a capable designer and can create specified complexity. Isn’t that what Wicken said to begin with in 1979 when he said that “Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order’ . . .”?

A4: We need to be clear about what natural selection is and does. First, you need a reproducing population, which has inheritable chance variations [[ICV], and some sort of pressure on it from the environment, leading to gradual changes in the populations because of differences in reproductive success [[DRS] . . . i.e. natural selection [[NS] . . . among varieties; achieving descent with modification [[DWM]. Thus, different varieties will have different degrees of success in reproduction: ICV + DRS/NS –> DWM. However, there is a subtlety: while there is a tendency to summarise this process as “natural selection,” this is not accurate. For the NS component does not actually ADD anything; it is a shorthand way of saying that less “favoured” varieties (Darwin spoke in terms of “races”) die off, leaving no descendants. “Selection” is not the real candidate designer. What is being appealed to is that chance variations create new varieties. So, this is the actual supposed source of innovation — the real candidate designer, not the dying-off part. That puts us right back at the problem of finding the shoreline of Island Improbable, by crossing a “sea of non-functional configurations” in which — as there is no function — there is no basis on which to choose. So, we cannot simply extrapolate a theory that may relate to incremental changes within an island of function to the wider situation of origin of functions. Macroevolution is not simply accumulated microevolution, not in a world of complex, configuration-specific function. (NB: The suggested “edge” of evolution by such mechanisms is often held to be about the level of a taxonomic family, like the cats or the dogs and wolves.)
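The ICV + DRS/NS –> DWM loop can be sketched in a few lines (population size, genome length and the deliberately smooth bit-counting fitness are invented for illustration; the point of the sketch is only that the selection step subtracts and never adds):

```python
import random

random.seed(3)

GENOME_LEN, POP_SIZE = 20, 50

def mutate(genome):
    """Inheritable chance variation (ICV): rewrite one random position."""
    i = random.randrange(len(genome))
    return genome[:i] + random.choice("01") + genome[i + 1:]

def generation(population, fitness):
    pool = population + [mutate(g) for g in population]   # variation proposes
    pool.sort(key=fitness, reverse=True)
    return pool[:POP_SIZE]                                # DRS/NS only culls

fitness = lambda g: g.count("1")   # smooth, connected toy island: each 1-bit counts
pop = ["0" * GENOME_LEN] * POP_SIZE
for _ in range(100):
    pop = generation(pop, fitness)

# On a smooth single island the loop climbs readily toward the peak of 20...
print(max(fitness(g) for g in pop))
# ...but every survivor was first produced by variation; the culling step
# introduced no configuration of its own.
```

Notice that success here depends on the landscape being one connected, gently sloped island; the culling step would have nothing to work with across a gap of zero-fitness configurations.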

Q5: The notion of “islands of function” is Creationist nonsense, and so is that of “active information.” Why are you trying to inject religion and “God of the gaps” into science?

A5: Unfortunately, this is not a caricature: there is a real tendency of Darwinist objectors to design theory to appeal to prejudice against theistic worldviews, and to suggest questionable motives, in ways that cloud issues and poison or polarise discussion. But I am sure that if I were to point out that such Darwinists often have their own anti-theistic ideological agendas and have sought to question-beggingly redefine science as, in effect, applied atheism or the like, that would often be regarded as out of place. Let us instead stick to the actual merits. Such as, that since intelligent designers are an observed fact of life, to explain that design is a credible or best causal explanation in light of tested, reliable signs that are characteristic of design, such as FSCO/I, is not an appeal to gaps. Similarly, to point to ART-ificial causes that leave characteristic traces, by contrast with those of chance and/or mechanical necessity, is not to appeal to “the supernatural,” but to the action of intelligence on signs that are tested and found to reliably point to it. Nor is design theory to be equated to Creationism, which can be seen as an attempt to interpret origins evidence in light of what are viewed as accurate records of the Creator. The design inference works back from inductive study of signs of chance, necessity and art, to cases where we did not observe the deep past, but see traces that are closely similar to those for which the only adequate, observed cause is design. So also, once we see that complex function dependent on many parts that have to be properly arranged and coupled together sharply constrains the set of functional as opposed to non-functional configurations, the image of “islands of function” is not an unreasonable way to describe the challenge. Where also, we can summarise a specification as a structured list of YES/NO questions that give us a sufficient description of the working configuration.
Which in turn gives us a way to understand Kolmogorov-Chaitin complexity or descriptive complexity of a bit-string x, in simple terms: “the length of the shortest program that computes x and halts.” This can be turned into a description of zones of interest T that are specified in large spaces of possible configurations, W. If there is a “simple” and relatively short description, D, that allows us to specify T without in effect needing to list and state the configs that are in T, E1, E2, . . En, then T is specific. Where also, if T is such that D describes a configuration-dependent function, T is functionally specific, e.g. strings of ASCII characters in this page form English sentences, and address the theme of origins science in light of intelligent design issues. In the — huge! — space of possible ASCII strings of comparable length to this page (or even this paragraph), such clusters of sentences are a vanishingly minute fraction relative to the bulk that will be gibberish. So also, in a world where we often use maps or follow warmer/colder cues to find targets, and where if we were to blindly select a search procedure and match it at random to a space of possibilities, we would be at least as likely to worsen as to improve odds of success relative to a simple blind at-random search of the original space of possibilities, active information that gives us an enhanced chance of success in getting to an island of function is in fact a viable concept.>>
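Since true Kolmogorov complexity is uncomputable, a general-purpose compressor is a common practical stand-in: compressed size upper-bounds descriptive complexity. The following sketch (the particular strings are ours, chosen for illustration) contrasts a string with a short description against same-length gibberish:

```python
import random
import zlib

def description_length(s):
    """Crude, computable stand-in for Kolmogorov/descriptive complexity:
    the byte length of a zlib-compressed encoding of s."""
    return len(zlib.compress(s.encode("utf-8"), 9))

# A highly specific string with a short description: "repeat 'ab' 500 times".
ordered = "ab" * 500

# Gibberish of the same length: no description much shorter than itself.
random.seed(4)
gibberish = "".join(random.choice("abcdefghij") for _ in range(1000))

# The ordered string compresses to a few dozen bytes; the gibberish stays
# close to its raw entropy of ~3.3 bits per character.
print(description_length(ordered), description_length(gibberish))
```

The short-description case is the analogue of a specified zone T picked out by a compact description D, as opposed to having to list its members one by one.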

__________

So, it seems that in the defined sense, conservation of information, search, active information, Kolmogorov complexity speaking to narrow zones of specific function T in wide config spaces W, the viability of these concepts in the face of drift, etc., are coherent, relevant to the scientific phenomena under study, and important. Where, the pivotal challenge is that for complex, functionally specific organisation and associated or implied information, there is but one empirically — and routinely — known source: intelligence. Let us see if further discussion of same will now proceed on reasonable terms. END

PS: Since we are going to pause and mark up the article by JoeF that JoeG makes reference to in comment no. 1, let me give a free plug to the ARN tee shirt (and calendar and prints), highlighting the artwork, under the doctrine of fair use (as it has become material to an exchange):

The ad blurb in part reads:

A recent book attacking intelligent design (Intelligent Thought: Science vs. the Intelligent Design Movement, ed. John Brockman, Vintage Press, May 2006) has chapters by most of the big names in evolutionary thought: Daniel Dennett, Richard Dawkins, Jerry Coyne, Steven Pinker, Lee Smolin, Stuart A. Kauffman and others. In the introduction Brockman summarizes the situation from his perspective: materialistic Darwinism is the only scientific approach to origins, and the “bizarre” claims of “fundamentalists” with “beliefs consistent with those of the Middle Ages” must be opposed. “The Visigoths are at the gates” of science, chanting that schools must teach the controversy, “when in actuality there is no debate, no controversy.”

While Brockman intended the “Visigoths” reference as an insult equating those who do not embrace materialistic Darwinism to uneducated barbarians, he has actually created an interesting analogy of the situation, and perhaps a prophetic look at the future. For it was the Visigoths of the 3rd and 4th centuries that were waiting at the gates of the Roman Empire when it collapsed under its own weight. For years the Darwinists in power have pretended all is well in the land of random mutation and natural selection and that intelligent design should be ignored. With this book (and several others like it), they are attempting to both laugh and fight back at the ID movement. Mahatma Gandhi summarized the situation well with his quote about the passive resistive movement: “First they ignore you, then they laugh at you, then they fight you, then you win.”

Worth thinking about.

Comments
KN,
the problem here is, if someone puts forth a theory of ‘naturalized intentionality,’ and we say, “but that’s not real intentionality, because it doesn’t fit our intuitions about what intentionality is!”, how are we to adjudicate between the theory and our pre-theoretic intuitions?
I don't think this is a fair way to put it. Sure, intuitions play a role, but they also play a role in the 'naturalized' attempts: go down the chain and eventually you're going to find someone going 'the world just has to be like that!' or 'the world can't be that way!' The criticisms of naturalized intentionality often involve showing their inadequacy, picking out contradictions, or showing that the move made can barely be called 'naturalistic' at all. I actually think all projects to 'naturalize' most things are pointless, because 'naturalism' has been put through the wringer too many times. Anyway, I think you may be taking on too much here. You seem to want to make use of Churchland's argument - but then you start talking about 'belief' again, which Churchland afaik rejects. I don't think you can just take Churchland's talk of 'maps' and 'accuracy' and then just graft 'belief' on to it - do that, and you're engaged in a radically different project than he is. As for the Patricia Churchland quote, I think what she was getting at isn't quite what you think. Evolutionary processes do not select for 'truth' - they select for survival. I don't see how you can read her to be talking about the duties of philosophers when the context is her describing evolutionary processes.nullasalus
April 10, 2013, 08:20 PM PDT
@StephenB (61), I could not agree more.
SB #61: I don’t think that approach will work for reasons stated earlier. If an individual naturalist, who does not know that an apple is an apple, interacts with a million other naturalists, each of whom is ignorant in the same way, how does social interaction and time cause everyone concerned to know the apple for what it is?
Typically the naturalist is under the false notion that one can fabricate a whole [the concept of the apple] from parts [plurality of partial fallible perspectives].
SB #61: He cannot simply sense all the features of x and y, add them up, and, in each case, assign a name to their sum total.
Indeed he cannot, but this is naturalism's core business!
SB #61: He must apprehend the thing as being what it is. He must know what it is that unifies those features that help to describe it. Otherwise, he may describe the thing, but he will never know the thing as it is.
Indeed. Understanding is about the whole – not the parts.
SB #61: Why not just accept the fact that an apple is an apple?
Indeed. Why not accept that the whole is more than the sum of its parts?Box
April 10, 2013, 08:06 PM PDT
That should be "between [two] naturalists."StephenB
April 10, 2013, 07:00 PM PDT
Kantian Naturalist
I say that because a plurality of cognitive subjects can share their perspectives, and with that comes a drastic increase in our grasp of objective reality.
I don't think that approach will work for reasons stated earlier. If an individual naturalist, who does not know that an apple is an apple, interacts with a million other naturalists, each of whom is ignorant in the same way, how does social interaction and time cause everyone concerned to know the apple for what it is?
Now, there is the question as to whether any cognitive grasp of objective reality is fully intelligible in light of “naturalism”.
Since naturalism does not recognize the ontological distinction between the knower and the thing known, that would, indeed, be a problem.
But I don’t see why the Churchlandian account — synaptically-encoded feature-space mappings that stand in a homomorphic relation to the environmental features thereby mapped, and actualized through the causal nexus between brain, body, and world — isn’t at least a promising explanation of the cognitive grasp of objective reality enjoyed by many different kinds of animals, including human beings.
In order to grasp objective reality, the knower must apprehend the essence of the thing known, not simply its features. Otherwise, he will never understand the difference between (x) and (y). He cannot simply sense all the features of x and y, add them up, and, in each case, assign a name to their sum total. He must apprehend the thing as being what it is. He must know what it is that unifies those features that help to describe it. Otherwise, he may describe the thing, but he will never know the thing as it is. If you sense the color of your best friend's eyes, the texture of his hair, and the shape of his body, your sense knowledge alone provides a radically incomplete picture. Unless your intellectual knowledge informs you that he is a human being and not a giraffe, you do not really know him for "what" he is. In other words, you apprehend reality only if you know in what way he is different from every other human (sense knowledge) and in what way he is the same as every other human (intellectual knowledge).
One interesting wrinkle in Churchland’s story is that the structure-preserving relation is homomorphic rather than strictly isomorphic, as Sellars had insisted. I would like to say that this makes a big difference, but I’m not entirely sure what it is. But I do want to retain Sellars’ insistence, apparently rejected by Churchland, that “signifying” is different from “picturing” — the former being the proper home of notions such as “truth” and “meaning,” and the latter being the proper home of the relation between conceptual frameworks and the world.
It sounds more like an innocent intramural disagreement between to naturalists who are far more serious about the prospect of joining hands to reject the rationality of essentialism.
So it is indeed quite central to my pragmatism that nothing of the discursive order, just as such, stands in a representational relation to anything in the natural order, just as such. However, I do worry that this commitment ultimately derives from taking for granted the conception of nature grounded in modern science, and there could be very good reasons for rejecting, or at least questioning, the dominance of that particular conception.
Why not just accept the fact that an apple is an apple?StephenB
April 10, 2013, 06:47 PM PDT
I say that because a plurality of cognitive subjects can share their perspectives, and with that comes a drastic increase in our grasp of objective reality.
How does a plurality of inaccurate perspectives increase our grasp of objective reality? From one perspective, I'm holding a banana. From another, I'm holding love. From another, I'm holding nothing. From another, I'm holding the universe. Therefore, the objective reality is? _________________Phinehas
April 10, 2013, 06:26 PM PDT
I'm not sure what the point of that is supposed to be, KF -- nothing I've said anywhere suggests that, on my view, we have a completely correct grasp of objective reality at any given time. My point was that -- and I thought I'd made this perfectly clear, no? -- we must have a partial grasp of objective reality in order for there to be the fallible-but-corrigible process that is empirical inquiry, and that that partial grasp is intelligible on naturalistic terms, as per Churchland's account of neurobiological representations. In fact, on his account, any creature that has any sort of synaptically-encoded feature-space mapping will, just for that reason, have some cognitive grasp of objective reality. That strikes me as eminently plausible and very likely true -- whereas I take it that it strikes most of you here as completely daft -- and so the conversation continues!Kantian Naturalist
April 10, 2013, 05:47 PM PDT
KN: Pardon a simple case -- such as, how "everybody" knows that the objectors to Columbus thought the world was flat and he proved it was round? In short, a strong consensus can be spectacularly in error. KFkairosfocus
April 10, 2013, 05:18 PM PDT
StephenB, yes, I think I'm willing to just deny that our collective cognitive achievements are "built on" our individual ones -- at least, I'll deny it if "built on" is understood in a merely additive sense. I say that because a plurality of cognitive subjects can share their perspectives, and with that comes a drastic increase in our grasp of objective reality. That is, our very capacity to grasp objective reality as objective, to take it as objective-for-us, is interdependent with our capacity to engage in intersubjective discourse -- to regard each other as cognitive subjects. Now, there is the question as to whether any cognitive grasp of objective reality is fully intelligible in light of "naturalism". But I don't see why the Churchlandian account -- synaptically-encoded feature-space mappings that stand in a homomorphic relation to the environmental features thereby mapped, and actualized through the causal nexus between brain, body, and world -- isn't at least a promising explanation of the cognitive grasp of objective reality enjoyed by many different kinds of animals, including human beings. One interesting wrinkle in Churchland's story is that the structure-preserving relation is homomorphic rather than strictly isomorphic, as Sellars had insisted. I would like to say that this makes a big difference, but I'm not entirely sure what it is. But I do want to retain Sellars' insistence, apparently rejected by Churchland, that "signifying" is different from "picturing" -- the former being the proper home of notions such as "truth" and "meaning," and the latter being the proper home of the relation between conceptual frameworks and the world. So it is indeed quite central to my pragmatism that nothing of the discursive order, just as such, stands in a representational relation to anything in the natural order, just as such.
However, I do worry that this commitment ultimately derives from taking for granted the conception of nature grounded in modern science, and there could be very good reasons for rejecting, or at least questioning, the dominance of that particular conception.Kantian Naturalist
April 10, 2013, 05:00 PM PDT
Yes, intentionality is going to be a big problem here, and I'm glad you raised it. I used to be fairly confident that intentionality could be "naturalized," one way or the other, but now I'm not so sure -- the problem here is, if someone puts forth a theory of 'naturalized intentionality,' and we say, "but that's not real intentionality, because it doesn't fit our intuitions about what intentionality is!", how are we to adjudicate between the theory and our pre-theoretic intuitions? My preference is for the theory over the intuitions, but not always. There does seem to be something basically right about the thought that the vocabulary of agency -- Sellars' "manifest image" -- has a kind of transcendental priority over empirical descriptions and explanations, and that priority cannot be easily accommodated by "naturalism". So far I've been concerned with explicating Churchland's neurosemantics. But I've also indicated here and there that I have some reservations about it, and now's the time to make those reservations explicit: I don't think that Churchland's neurosemantics is a really a theory of semantics. I think that when it comes to semantic content, I far prefer Robert Brandom's account of inferential semantics, wherein semantic content is something done by persons, not by brains, insofar as those persons are members of a community bound together by a shared linguistic tradition. On this account, the content of a concept is constituted by its inferential role -- what judgments it licenses, what judgments are incompatible with it, and so on. What I think Churchland has given us is not really a semantic theory at all, although it is a theory of representations -- in effect he shows us how to treat "representation" as a biological category. (Much the same has been said about Ruth Millikan's work, with which I'm not yet familiar.) 
The reason why I resist calling Churchland's account a semantic account is because synaptically-encoded feature-space maps do not, all by themselves, participate in norm-governed inferences. However, I do think that Churchland is right to resist the suggestion that brains are merely syntactical. At the heart of my thinking about these (and, indeed, many other) topics is the distinction between conceptual explication and causal explanation. A conceptual explication specifies what's going on conceptually, e.g. specifying what other concepts we need to understand in order to have a firm and clear grasp of some problematic target-concept. A causal explanation specifies what various causal powers (objects, properties, etc.) must be realized in order to bring about the phenomenon referred to by some concept. Here's an example -- solubility. A conceptual explication of solubility would be, "x is soluble in y if and only if, if x is placed in y, then (ceteris paribus) it would dissolve". A causal explanation of solubility would involve talking about the distribution of positive and negative charges over molecular surfaces. So here too -- Churchland's feature-space mappings may causally explain semantic content, but they aren't the same concept as semantic content -- no more than distribution of positive and negative charges 'means the same thing as' solubility. As for Patricia Churchland's quote (from her "Epistemology in the Age of Neuroscience", 1987, The Journal of Philosophy), I don't take her to be saying that truth is epiphenomenal or whatever, but rather that the job of the naturalistic epistemologist is to first figure out what semantic content looks like "in the order of nature", and then figure out how the acquisition of language affects pre-linguistic content. Only once language has come on the scene do we have anything like judgments, a fortiori only with the advent of language are there any mental contents with truth-value. That's her view, as I understand it. 
As for me, it doesn't seem right to say that non-discursive animals lack beliefs and desires -- rather, my own view is that there is a kind of "opacity" to their beliefs and desires; we know that they have them, but we cannot tell what they are, except within certain rough-and-ready approximations. But I think that there's basically the same story going on with non-discursive cognitive subjects as there is with us discursive cognitive subjects -- we are justified in attributing true beliefs to them insofar as they have generally reliable cognitive maps of their environments. My cats have 'true beliefs' about where the food bowl is, and 'false beliefs' about how easy it would be to catch the birds outside my apartment. (They are both indoor cats; I don't let them out.) Notice, again, that I'm trying to cash out "reliable" in terms of the homomorphic relation between domains and feature-spaces, not in terms of overall reproductive success. (By that criterion my cats are both dismal failures, since they are both fixed.)Kantian Naturalist
April 10, 2013, 02:44 PM PDT
KN,
I don’t think I’m equivocating on “reliable,” because we can distinguish between what constitutes a reliable cognitive map and the usual consequences of a reliable map. A set of neurobiological processes is functioning as a reliable map if there is a homomorphism between those processes and some part of the environment.
a second-order resemblance — the system of relations at the neurobiological level stands in a homomorphic relation to the system of relations at the physical level.
Here's another problem: when you talk about a 'homomorphism between those processes and some part of the environment', you're off into intentionality discussions. Both the first order and the second order resemblance that you're talking about here can't be intrinsic by Churchland's view - they would have to be derived. But a derived relation wouldn't be of use anyway in this context, at least not without tracing things back to the intrinsic - and Churchland, as far as I know, will not go for intrinsic meaning.
I’ve gone into this point in order to stress the key idea behind Churchland’s “neurosemantics” (as he calls it): it’s not that we eliminate semantic content, but that we use neurobiology to construct a better theory of what semantic content really is.
I've seen these kinds of responses before, and they've never been compelling. The difference between a reduction and an elimination is, at times, very thin. An eliminative materialist can deny they're eliminative about anything and argue that they're simply trying to show everyone what the mental 'really is', and it turns out that the mental is nothing but the mechanistic physical.
Now, Churchland does, of course, think that animals that can reliably map their environments will tend to leave more offspring than those that cannot — because reliably mapping the environment allows the organism to coordinate its perceptual ‘input’ and motor ‘output’, and so better accomplish all of its practical activities, including reproduction. What gets mapped, and how, depends on the particular environment, the kind of organism, and their mode of interaction. Oysters map their environment far differently from lobsters or tigers. In other words, the kind of reliability that Churchland is talking about is different from the kind of reliability that Plantinga concedes. The kind of reliability that Plantinga concedes seems to me to be a reliability constituted by reproductive success, aka staving off extinction one generation at a time. That’s “purely external,” so to speak — semantic content doesn’t matter, or seems not to. Whereas the kind of reliability that Churchland is talking about is a reliability of semantic content — it’s just non-propositional semantic content.
Again, whether we're talking about maps or beliefs, we're going to have to ask just how these things are constituted given such and such metaphysics. For Churchland, from what I read, there is no intrinsic meaning in the brain - not for a belief, and not for a map. A map whose semantic content is entirely derived won't be of any use against the EAAN. This I bring up before pointing out that Churchland, again, is denying the existence of 'beliefs' altogether - which makes his reply not exactly the most compelling one right out of the gates. Further, saying that such and such neural states contain 'non-propositional semantic content' itself won't do much. Keep in mind that Plantinga's argument didn't assume the absolute invisibility of semantic content to selection - he points out problems both with epiphenomenal semantic content (which Churchland's scheme may well fall prey to) as well as situations where semantic content can be said to enter the causal chain. Here's one quote to consider from Patricia Churchland: Improvements in sensorimotor control confer an evolutionary advantage: a fancier style of representing [the world] is advantageous so long as it is geared to the organism's way of life and enhances the organism's chances of survival. Truth, whatever that is, definitely takes the hindmost. nullasalus
April 10, 2013, 12:52 PM PDT
Kantian Naturalist
I’d like to see a response from you before I’m convinced that I’ve been refuted.
OK. Your original point was that even if the truth-tracking character of our native cognitive mechanisms is unreliable, artificial mechanisms in the form of institutions and practices can provide warrant for scientific theories. Collective cognition can, one gathers, compensate for the lack found in individual cognition. As I pointed out, this is logically impossible. The reliability of Collective cognition builds on the reliability of individual cognition and cannot be separated from it. In keeping with that point, institutions and practices do not create anything or evaluate anything. On the contrary, they are the thing being created by the people who bring them into being. At 42, you write this:
Put otherwise, all that a good naturalist (like Churchland) need be committed to is that our ordinary belief-formation mechanisms are generally reliable about some things, and that scientific procedures are highly artificial (so not “natural”, in one sense) but highly reliable techniques for arriving at much more reliable (though often counter-intuitive) beliefs.
You seem to be confusing the idea of "perfect" knowledge, which no one has about anything, and "reliable" knowledge, which all rational people have in the context of evaluating what is true from what is false. In a naturalistic, neo-Darwinian framework, there is no reason to accept (and every reason to reject) the idea that any beliefs at all are reliable. As I pointed out @30, “If individual belief systems are unreliable from a Darwinist perspective, then so are the institutions and practices that build on and derive from those individual belief systems. Accordingly, the amount of time that it takes for an institution to develop is irrelevant since it would simply be the case of newer unreliable beliefs being piled on top of older unreliable beliefs.” Your comment @42 does not address the problem. You write:
Individual cognitive subjects may indeed have some cognitive grasp of objective reality, but communal interactions (esp. the distinctive kind mediated by a shared language) make it possible for them to have some warrant that they have such a grasp.
Again, with respect to the metaphysical principles that guide science, you are confusing perfect knowledge with reliable knowledge. Our grasp of reality (knowing things in themselves as they are) is either reliable or it isn't; there is no middle ground. We either know an apple "as an apple" or we do not. We either know that it is not a banana, or we do not. Reason's rules are either true or they are false, but if they are false, then there is no such thing as true and false, meaning there is no rationality. Communal interactions involving unreliable beliefs and unreliable evaluations about beliefs cannot provide any warrant for true beliefs. Among other things, the emerging communal system of checks and balances that is supposed to do the testing is, itself, built on the unreliable belief systems of individuals and cannot, therefore, be trusted. The evaluative mechanism would be no more reliable than the beliefs that are being evaluated and no progress would be possible. A million instances of unreliable input cannot generate one reliable belief. The whole idea is preposterous. There is such a thing as “synergy” or the “assembly effect bonus,” which occurs when goal-oriented people forge a consensus in a spirit of true dialogue. But that dynamic is built on rational interaction, which, in turn, is built on reason’s rules, which, in turn, constitute the basic reliable beliefs by which all other beliefs are evaluated. Because you deny reason’s rules, you have no rational standard for evaluating any belief system. On the contrary, you build your concept of social interaction on the absurd idea that rationality is determined by communal norms, which are always changing and, therefore, useless as a standard for meaningful dialogue.
So having more than one cognitive subject is not merely additive, as StephenB seemed to suggest, but rather results in a radical transformation of one’s epistemic situation.
This is a good example of a “poof--there it is” argument. There is not (nor could there ever be) any mechanism by which an amalgamation or accumulation of unreliable beliefs could be transformed into a true belief. Without a pre-existing self-evident truth, such as the Law of Non-Contradiction serving as a rational base, there is no way to separate true beliefs from false beliefs. If a community of naturalists denies that truth, social interaction and time will not “transform” them into rational people. Here is the sociological and anthropological fact: A community of rational people reinforces rationality; a community of irrational people reinforces irrationality. As a member of the community of naturalists, you promote irrationality, albeit in a refreshingly congenial way.StephenB
April 10, 2013, 11:27 AM PDT
In re: StephenB @ 50:
Are we supposed to forget that the main element of your original claim, which I refuted, was that, in the case of unreliable human cognition, we can confer more warrant on the Collective Scientific Community than on a single individual?
In response to your skeptical challenge @ 30 (repeated @ 35 and 40), I responded at my 42, where I gave a sketch as to why we have good reasons to believe that intersubjective or communal cognitive achievements are more reliable than those of individual cognitive subjects. I'd like to see a response from you before I'm convinced that I've been refuted.Kantian Naturalist
April 10, 2013, 07:51 AM PDT
Nullasalus, First, let me assure you that I don't find your tone aggressive at all -- critical, yes, but by no means aggressive. In fact, I quite enjoy our conversations, and I get a lot out of them. Now, to work!
That’s an equivocation on the word ‘reliable’. From the evolutionary perspective that Churchland has offered up – at least as you describe it – a ‘reliable’ “synaptically-encoded feature-space map” is measured in terms of fitness and survival. Is the population of the species with such and such map surviving and thriving more than the nearest alternative? Yes? Well, then it’s a reliable map.
I don't think I'm equivocating on "reliable," because we can distinguish between what constitutes a reliable cognitive map and the usual consequences of a reliable map. A set of neurobiological processes is functioning as a reliable map if there is a homomorphism between those processes and some part of the environment. Take, for example, color. On the neurological side, there is the space of all possible colors that are humanly perceivable. On the physical side, there is the range of electromagnetic frequencies to which our retinas are sensitive. And there is a homomorphism between them, as mediated by the kinds of cones in our retinas, how those cells send information to each other and to other parts of the brain, and so forth. This is not a first-order resemblance, as with Locke -- each bit of semantic content stands in some relation to some bit of external reality -- but rather a second-order resemblance -- the system of relations at the neurobiological level stands in a homomorphic relation to the system of relations at the physical level. I've gone into this point in order to stress the key idea behind Churchland's "neurosemantics" (as he calls it): it's not that we eliminate semantic content, but that we use neurobiology to construct a better theory of what semantic content really is. Now, Churchland does, of course, think that animals that can reliably map their environments will tend to leave more offspring than those that cannot -- because reliably mapping the environment allows the organism to coordinate its perceptual 'input' and motor 'output', and so better accomplish all of its practical activities, including reproduction. What gets mapped, and how, depends on the particular environment, the kind of organism, and their mode of interaction. Oysters map their environment far differently from lobsters or tigers. In other words, the kind of reliability that Churchland is talking about is different from the kind of reliability that Plantinga concedes.
The kind of reliability that Plantinga concedes seems to me to be a reliability constituted by reproductive success, aka staving off extinction one generation at a time. That's "purely external," so to speak -- semantic content doesn't matter, or seems not to. Whereas the kind of reliability that Churchland is talking about is a reliability of semantic content -- it's just non-propositional semantic content. Put this way, we can see Churchland as rejecting Plantinga's implicit dichotomy between 'external' behaviors and 'internal' beliefs. There's a third category: non-propositional semantic contents that are realized as synaptically-encoded feature-space maps of the motivationally salient environment and that are causally efficacious in coordinating perceptual and motor activity. So this allows Churchland to put semantic content back in the causal nexus, and as such, it can be subject to selective forces. Two further questions: (1) do all organisms reliably map their environments? and (2) what about discursive semantic contents -- beliefs and desires -- where do they fit? (And what, after all, is there to say about truth and justification?) On the first question, I don't know what Churchland would say, but I myself would say that an absolute minimal requirement for cognition (qua reliable mapping of some environmental domain) is that there be intermediary neurons between the sensory receptors and the motor effectors. So I wouldn't say that bacteria or even complex eukaryotes would count as cognitive subjects. (Evan Thompson would disagree with me, and there are molecular biologists who think that even molecules are cognitive on some level. I confess that I find that view utterly baffling.) I'll return later on today to say something more about the relation between semantic content qua reliable mapping and semantic content qua propositional attitudes.Kantian Naturalist
April 10, 2013, 06:54 AM PDT
H'mm: Both these threads seem to have converged on a similar somewhat tangential focus. Let me post here too what I just noted to BD in ID Founds 17, noting that scientism underwritten by evolutionary materialist ideology dressed up in the lab coat and dominating science, in spite of serious issues of question begging, epistemological breakdown and more, is a context in which all the discussion proceeds. So, let us be bold enough to ask, whether we are today living in an evolutionary materialist cave of question-begging shadow shows presented by devotees of scientism dressed up in the holy lab coat, in the name of Big-S Science (how dare you doubt or question . . . ): ____________ >> In the universe of discourse we must address, the question of grounding the human mind as a reasonably effective cognitive system does arise. For, we have a persistent evolutionary materialism that seeks to pin mind down to brain and CNS in action. In that setting, the following from Leibniz's Monadology, i.e. the analogy of the mill [HT: Frosty], is quite apt:
14. The passing condition which involves and represents a multiplicity in the unity, or in the simple substance, is nothing else than what is called perception. This should be carefully distinguished from apperception or consciousness . . . . 16. We, ourselves, experience a multiplicity in a simple substance, when we find that the most trifling thought of which we are conscious involves a variety in the object. Therefore all those who acknowledge that the soul is a simple substance ought to grant this multiplicity in the monad . . . . 17. It must be confessed, however, that perception, and that which depends upon it, are inexplicable by mechanical causes, that is to say, by figures and motions. Supposing that there were a machine whose structure produced thought, sensation, and perception, we could conceive of it as increased in size with the same proportions until one was able to enter into its interior, as he would into a mill. Now, on going into it he would find only pieces working upon one another, but never would he find anything to explain perception. It is accordingly in the simple substance, and not in the compound nor in a machine that the perception is to be sought. Furthermore, there is nothing besides perceptions and their changes to be found in the simple substance. And it is in these alone that all the internal activities of the simple substance can consist.
We may bring this up to date by making reference to more modern views of elements and atoms, through an example from chemistry. For instance, once we understand that ions may form and can pack themselves into a crystal, we can see how salts with their distinct physical and chemical properties emerge from atoms like Na and Cl, etc. per natural regularities (and, of course, how the compounds so formed may be destroyed by breaking apart their constituents!). However, the real issue evolutionary materialists face is how to get to mental properties that accurately and intelligibly address and bridge the external world and the inner world of ideas. This, relative to a worldview that accepts only physical components and must therefore arrive at other things by composition of elementary material components and their interactions per the natural regularities and chance processes of our observed cosmos. Now, obviously, if the view is true, it will be possible; but if it is false, then it may overlook other possible elementary constituents of reality and their inner properties. Which is precisely what Leibniz was getting at. Moreover, as C S Lewis aptly put it (cf. Reppert's discussion here), we can see that the physical relationship between cause and effect is utterly distinct from the conceptual and logical one between ground and consequent, and thus we have no good reason to trust the deliverances of the first to have anything credible to say about the second. Or, as Reppert aptly brings out:
. . . let us suppose that brain state A, which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions.
That is the naturalist's dilemma: he must use his mind to reason and must trust his capability to perceive accurately, and to know, but in his scheme of things, the ground on which such must stand is undercut by the frame of the system itself. His scheme becomes self-referentially incoherent and, I daresay, absurd. This happens in many ways and by many paths, and is often projected onto opponents by the likes of a Crick or a Marx or a Freud or a Skinner or a Dawkins, but on reflection it is apparent that the same knife of self-referentiality cuts both ways. Coming back to your own view so far as I have made it out, there is a similar position. For, your view reduces the world of our experiences of external reality to, in effect, a Plato's Cave delusion. No scheme that does that escapes self-referentiality and an explosively self-defeating spiral of challenges: why should we accept the credibility of perceptions, beliefs and arguments at level n + 1 if those of levels 1 to n have fallen to the acid of doubt and dismissal? Instead, it seems much wiser to me to accept that the consensus of our senses, experiences and insights is capturing something real, however prone we are to err. Indeed, that fact of reality itself turns into a pivot, a point of undeniably certain truth and warranted self-evident knowledge [to deny that error exists entails that error exists]. Thus schemes of thought that deny external reality as what is there to be in error about, or deny truth as that which accurately refers to reality, or dismiss knowledge as that which warrants beliefs concerning reality, in some cases to undeniable certainty, etc., all fail.
In particular, the construction of a system dominated by a priori materialism -- often wearing the lab coat of scientism, with its notion that ideologically materialist science embraces and reveals knowledge whilst metaphysics, epistemology, philosophy and "theology" can be derided and dismissed across the board as outdated and dubious speculations -- ends up in question-begging and self-referential incoherence. The artificial construct, institutional science dominated by scientism and unexamined materialism (let's not fool ourselves), stands on a fatally cracked foundation. And, given just how widespread such schemes are in our day, that analysis of the implications of the undeniable reality that error exists therefore cuts a wide mowing swath indeed across the contemporary scene in the marketplace of ideas and values. Back to basics -- first principles of right reason, self-evident truths, the possibility of real knowledge, etc. -- and a much more serious respect for old-fashioned common good sense. It is time to notice that the chains of mental slavery have been snapped, and that we are no longer tied to the post in the cave of shifting shadow shows and the manipulative power games that stage them. So, let us step out of the shade and up into the sunshine. For, this is one time that we can get a breakthrough to truth and to liberation thereby: you shall know the Truth, and the Truth shall make ye free. But, that requires understanding why the same Worthy, in that same context, warned his interlocutors that they were in a situation where, because he spoke the truth, they were unable to hear and understand what he had to say; indeed, were violently inclined to object and oppose. As he said in his famous Sermon on the Mount, the eyes are the lamp of the body, so if our eyes are good, we are full of light. But if they are bad, so bad that what we think is light is in fact darkness, how great is our darkness.
I think Jesus knew exactly what the Greek thought on enlightenment so decisively shaped by Plato's parable of the Cave was all about, and the spreading influence of such ideas, e.g. from Sepphoris, a major Gentile centre in Galilee. So, he spoke at several levels, some corrective to Hebrew caves, and some to Greek, Roman and wider gentile ones. The gospel is light. Our problem is, that light has come but too often we choose darkness instead of light, as our deeds are evil and for fear that our addiction to evil will be exposed. Indeed, we are often confused by light, and even angered by it. Sometimes to the point of murderous rage. As, happened to him. But, that was Friday, Sunday was a-coming. Sunday has come, with the duly prophesied resurrection power (of which we have good warrant), so let us be as the one Jesus spoke of who lives by the truth -- yes, in the teeth of a day that derides and dismisses truth itself -- and so will walk into the light so that it may be manifest that what he does is done through the grace and redemption of God. And, so, let us restore our civilisation to light, rather than surrendering it to the ever advancing darkness. >> ______________ I trust this will be helpful. KFkairosfocus
April 10, 2013, 01:31 AM PDT
Kantian Naturalist ...
but the rational warrant of scientific theories lies in how we test theories, not in how we generate them!
Are we supposed to forget that the main element of your original claim, which I refuted, was that, in the case of unreliable human cognition, we can confer more warrant on the Collective Scientific Community than on a single individual?StephenB
April 9, 2013, 07:03 PM PDT
No rush, KN. Pardon me if my tone is aggressive. I'm not trying to be an ass - I just have never learned how to be anything but direct and forceful in some contexts. You're one of the more pleasant skeptics around here in a number of ways.nullasalus
April 9, 2013, 05:39 PM PDT
Denialism as usual, KN . . . you cite a specific claim that "the warrant of scientific theories does not depend on the reliability of any untutored cognitive endowments of any individual human brain." And when shown that 'scientific cognitive endowments' are already present (i.e. made in the image of God), you deny it is relevant. How convenient to make up the rules as you go, just to preserve your atheistic belief system. Science practiced KN style: http://conversationsofchange.com.au/wp-content/uploads/2013/02/head_in_the_sand-461x307.jpgbornagain77
April 9, 2013, 05:39 PM PDT
Interesting response, Nullasalus! I'll have to think more on this and respond later -- probably tomorrow, because my brain is cooked for the night. Let me say this, though: I do reject Churchland's eliminative materialism, but for the following reason -- I don't think that "folk psychology" (propositional attitude ascriptions, etc.) is best understood as an empirical theory, so it's not something that could be replaced by a better empirical theory. But I do acknowledge that that puts me in a more difficult bind than Churchland's -- I have to confront questions that he can simply evade. (As for Rosenberg -- uggh!)Kantian Naturalist
April 9, 2013, 05:27 PM PDT
KN,
In terms of how Paul represents the situation, Plantinga notes that it is entirely conceivable that Paul’s psychological representations — what Paul believes to be the case — could be wildly off from what is the case. But notice the assumption: that when we talk about semantic content, that’s got to be in terms of beliefs and desires. And that’s the assumption that Churchland rejects, because on his view, Paul’s neurobiological processes are his semantic contents.
And this just opens you up to the exact difficulty that I mentioned. You say further:
So, to Nullasalus’ implicit challenge, “could a discursive animal have generally reliable synaptically-encoded feature-space maps and yet have mostly false beliefs?”, the answer it seems to me has to be “No”. I say that because there cannot be a total and systematic discrepancy between “having generally reliable synaptically-encoded feature-space maps” and “having mostly true beliefs”, and that is because “having generally reliable synaptically-encoded feature-space maps” and “having mostly true beliefs” are really just two different ways of talking about the semantic content of discursive cognitive subjects.
That's an equivocation on the word 'reliable'. From the evolutionary perspective that Churchland has offered up - at least as you describe it - a 'reliable' "synaptically-encoded feature-space map" is measured in terms of fitness and survival. Is the population of the species with such and such map surviving and thriving more than the nearest alternative? Yes? Well, then it's a reliable map. But that's not the sort of reliability that Plantinga is casting doubt on. In fact, Plantinga seems to grant that you can have that kind of reliability in E&N - hence his examples of creatures that, despite having false beliefs, or irrational beliefs, or (I would personally add) even no beliefs whatsoever, still exhibit behaviors and actions that are conducive to survival. In that sense, they are reliable. It just happens to not be a reliability anyone is concerned about. You say that the answer to you seems to be no. But frankly, the weight of evidence is on my side here: I can point at no shortage of creatures that engage in behavior which is, on the whole, individually or collectively beneficial to the survival of their population - despite them having no beliefs whatsoever, possibly no conscious awareness to speak of. (Do bacteria have beliefs? Etc.) My implicit challenge wasn't really a challenge: it was a statement, one that I can easily defend, and which I just provided some more evidence for. The best way you can cook what you're saying here, on behalf of Churchland, is that Churchland is an eliminativist about beliefs to begin with - so Plantinga's charge doesn't even get off the ground. Guys like Alex Rosenberg play this kind of card too, and we've kicked it around here before. To say that's not the most compelling response is putting it lightly - go ahead, try to make the argument that everyone here thinks they have beliefs, but they're actually mistaken. (I suppose, they can't be mistaken, because that would require they had a belief, therefore...)
But once you're accepting the existence of beliefs to begin with, then the 'reliability' cashes out the way I've noted - and it's not a concern to Plantinga's EAAN. After all, the EAAN does not argue that we couldn't *survive* or even thrive in a reproductive sense given E&N. If anything, it assumes the opposite.nullasalus
April 9, 2013, 05:18 PM PDT
BornAgain77, that's an interesting article -- and very much in line with one of my philosophical heroes, John Dewey -- so thank you! But it doesn't really touch on the point I was making, which is about warrant. It's a nice point that children can engage in the epistemic habits of scientists (does our school system destroy that habit, I wonder?), but the rational warrant of scientific theories lies in how we test theories, not in how we generate them. The original article is by Allison Gopnik -- I'll check it out! (Interestingly, Gopnik also wrote The Philosophical Baby: What Children's Minds Tell Us About Truth, Love, and the Meaning of Life -- I vaguely recall that a friend of mine read it last year, when he became a father. I'll see what he thought of it.)Kantian Naturalist
April 9, 2013, 05:05 PM PDT
Churchland (again):
. . . the dominant scheme of representation in biological creatures generally, from the Ordovician to the present, is the internal map of a range of possible types of sensorily accessible environmental features. Not a sentence, or a system of them, but a map. Now a map, of course, achieves its representational successes by displaying some sort of homomorphism between its own internal structure and the structure of the objective domain it purports to portray. And unlike the strictly binary nature of sentential success (a sentence is either true or it's false), maps can display many different degrees of success and failure, and can do in many distinct dimensions of possible 'faithfulness', some of which will be relevant to the creature's practical (and reproductive) success, and many of which will not.
In other words, what Churchland calls "synaptically-encoded feature-space maps" carry out the Hard Work of representing the environment. Beliefs are, on his view, late-comers to the game -- they only arise with a shared language, and animals were reliably portraying their environments for hundreds of millions of years before that happy event. Now, one might ask, "could a discursive animal have generally reliable synaptically-encoded feature-space maps and yet have mostly false beliefs?" This is basically how Nullasalus takes up Plantinga's "Paul the hominid" case:
Arguably, Plantinga’s examples – the person who has irrational beliefs (or thoughts, if you like) but nevertheless engages in behavior that promotes survival – has an ‘accurate map’ insofar as it’s a map conducive to survival. If we live in a world of ‘accurate maps’ yet the reliability of our thoughts is low or inscrutable, I think you’d see why this doesn’t exactly threaten the EAAN.
(I've been told, though I don't know this for sure, that Plantinga calls his hominid "Paul" as a way of poking fun at Churchland.) Now, Plantinga himself formulates the "Paul the hominid" case in terms of external behavior -- Paul does, after all, run away from the tiger -- rather than in terms of what's going on in Paul's neurobiological mechanisms. In terms of how Paul represents the situation, Plantinga notes that it is entirely conceivable that Paul's psychological representations -- what Paul believes to be the case -- could be wildly off from what is the case. But notice the assumption: that when we talk about semantic content, that's got to be in terms of beliefs and desires. And that's the assumption that Churchland rejects, because on his view, Paul's neurobiological processes are his semantic contents. So, to Nullasalus' implicit challenge, "could a discursive animal have generally reliable synaptically-encoded feature-space maps and yet have mostly false beliefs?", the answer it seems to me has to be "No". I say that because there cannot be a total and systematic discrepancy between "having generally reliable synaptically-encoded feature-space maps" and "having mostly true beliefs", and that is because "having generally reliable synaptically-encoded feature-space maps" and "having mostly true beliefs" are really just two different ways of talking about the semantic content of discursive cognitive subjects. (For non-discursive cognitive subjects, we have only the first way.) (Having written this up, I do worry that I'm making a slight conflation of Churchland and Davidson -- I'll run this past some friends and see what they think -- but it's good enough for the time being, I think.)Kantian Naturalist
April 9, 2013, 04:59 PM PDT
As to KN's comment:
Churchland’s claim, to be precise, is that the warrant of scientific theories does not depend on the reliability of any untutored cognitive endowments of any individual human brain. (I take it that that’s obviously true and invites no further argument.)
empirical evidence just ain't your friend KN:
Children Act Like Scientists - October 1, 2012 Excerpt: New theoretical ideas and empirical research show that very young children’s learning and thinking are strikingly similar to much learning and thinking in science. Preschoolers test hypotheses against data and make causal inferences; they learn from statistics and informal experimentation, and from watching and listening to others. The mathematical framework of probabilistic models and Bayesian inference can describe this learning in precise ways. http://crev.info/2012/10/children-act-like-scientists/
bornagain77
April 9, 2013, 04:47 PM PDT
Churchland seems committed to a fairly interesting and provocative assumption, which I confess I might not have even noticed if I hadn't been reading a lot of Hegel and C. I. Lewis lately. (Davidson makes a similar point in his triangulation argument, but I think the basic points come across without the form in which Davidson puts them.) The assumption is this: objectivity requires intersubjectivity. Put otherwise, communal social interactions can do something that merely individual cognizers cannot do: have warrant that their cognitive activities are bearing on objective reality. [Note: I am using "objective" to mean "independent of any particular cognitive subject", in contrast with "absolute", which I would use to mean "independent of all particular cognitive subjects". So in talking of our cognitive access to objective reality, I am not talking of our cognitive access to the God's-eye view, but of our access to how things are regardless of how any particular cognitive subject takes them to be. Thus construed, the converse of "objective" is "subjective", and the converse of "absolute" is "relative".] Now, why might this assumption seem reasonable? It seems reasonable because a community of cognitive subjects is a plurality that is able to share perspectives. So no individual cognitive subject is enclosed within its own perspective. (Think of Leibniz's monads.) Cognitive subjects are differentiated by virtue of embodiment, spatio-temporal location, and cognitive capacities (including, importantly, perceptual capacities). Cognitive subjects who are able to exchange their perspectives through a shared language -- discursive cognitive subjects -- are thereby able to coordinate their orientations on objects and properties. (Non-discursive cognitive subjects do this as well -- e.g.
a wolf-pack cooperating on a hunt -- but the kinds of social activities are much more limited, partly because of the kinds of cognitive mechanisms each animal has, and because they can transmit much less information to each other.) (Compare: "Did you hear that?" "Yeah, I did!" "What was that?" with "Did you hear that?" "No" "Oh, I thought I heard something.") Individual cognitive subjects may indeed have some cognitive grasp of objective reality, but communal interactions (esp. the distinctive kind mediated by a shared language) make it possible for them to have some warrant that they have such a grasp. So having more than one cognitive subject is not merely additive, as StephenB seemed to suggest, but rather results in a radical transformation of one's epistemic situation. (Including, it should be noted, one's epistemic situation with regard to one's self -- the very complex kind of self-consciousness that we enjoy is itself mediated by a long and complex history of social transactions that begin in infancy.) And this process of mutual adjustment and coordination is not just synchronic, but also diachronic -- we can and do learn from the insights and errors of previous generations, seeing new ways of improving upon the former and avoiding the latter, in light of our own uptake of objective reality.Kantian Naturalist
April 9, 2013, 04:39 PM PDT
It's interesting to watch LYO emote his dissonance over these past weeks.
Actually, some of us find it boring.Daniel King
April 9, 2013, 02:26 PM PDT
Kantian Naturalist
Churchland’s claim, to be precise, is that the warrant of scientific theories does not depend on the reliability of any untutored cognitive endowments of any individual human brain. (I take it that that’s obviously true and invites no further argument.)
Go ahead and humor me with an argument. If human cognition is unreliable as a truth tracker, how can it distinguish relevant data from irrelevant data, interpret the data rationally, or build a theory for which there is warrant?StephenB
April 9, 2013, 02:11 PM PDT
KN,
Churchland then proceeds to list several of the practices and technologies he has in mind, such as double-blind studies, testing for statistical significance, comparing theories against each other, directly comparing predictions with experimental data, and also the extensive augmentation of our sensory modalities with telescopes, microscopes, nucleic-acid sequencers, and radioactive dating.
And here's the problem: double-blind studies, tests for statistical significance, theory comparisons, comparing predictions to data, etc, all are or involve human beings making judgments, stating beliefs, providing arguments, etc. It's happening at step after step of the described scientific process. Theories do not pop into existence of their own accord - these are developed by humans. What counts as 'data' is not granted to us by the Magical Science Golem - this is decided by humans. Etc, etc. Again: I gave the example of the irrational, crazy man programming a computer. Can we suddenly trust the results of the computer, just because it IS a computer (It's complex!), despite it being built and programmed by a lunatic? If not, well, then you begin to see why Churchland's response isn't going to work.
Churchland’s claim, to be precise, is that the warrant of scientific theories does not depend on the reliability of any untutored cognitive endowments of any individual human brain. (I take it that that’s obviously true and invites no further argument.)
No, that's going to need to be unpacked. If Churchland's argument relies on the assumption that there are 'tutored cognitive endowments' - "someone went off and became a scientist, or did scientific experiments, or..." etc, then we're going to need an argument for what it is about these practices that (given the lack of reliability about our cognitive processes otherwise) makes them immune to the problems Plantinga has argued affect us generally. Note that it's not going to do any good here to list off all the practices of science that you have confidence in, because each and every one of those practices is going to be fallible in the sense that someone can always draw the wrong conclusion either from them, or in the process of performing them. Re: Churchland's talk about 'maps' versus 'beliefs', I don't think this is going to help out at all on this subject. Arguably, Plantinga's examples - the person who has irrational beliefs (or thoughts, if you like) but nevertheless engages in behavior that promotes survival - have an 'accurate map' insofar as it's a map conducive to survival. If we live in a world of 'accurate maps' yet the reliability of our thoughts is low or inscrutable, I think you'd see why this doesn't exactly threaten the EAAN. Trying to argue that the reliability of our thoughts is not low or inscrutable on the grounds that reliability is measured in terms of a God's eye view of our actions and whether or not they're conducive to survival, *regardless of the accuracy of the particular thoughts or beliefs we have*, burns this as a response to Plantinga. There's another problem. The thrust of Churchland's response so far focuses on trying to insist that our belief that evolution is true can be justified on E&N, thanks to the practices of science. I've already argued/pointed out why I think this is going to fail, but beyond that, metaphysical and philosophical views are going to fall outside the scope of science.
A reply which results in the conclusion of 'given E&N, our beliefs about evolutionary theory may be reliable, but our beliefs about naturalism are not' would be a pyrrhic victory to say the least.nullasalus
April 9, 2013, 12:05 PM PDT
Churchland's claim, to be precise, is that the warrant of scientific theories does not depend on the reliability of any untutored cognitive endowments of any individual human brain. (I take it that that's obviously true and invites no further argument.) So the Hard Questions here are: (1) just how much reliability must be assigned to our 'natural' cognitive mechanisms, in order for them to be so much as capable of generating the self-correcting process of empirical inquiry, of which the institutions and practices of modern science are a significant example? (2) is that reliability consistent with what we know from evolutionary theory and cognitive neuroscience? Notice, by the way, that Plantinga's key premise isn't "naturalism and evolution together entail that our cognitive mechanisms are unreliable" but rather "naturalism and evolution together entail that we cannot ascertain to what degree our cognitive mechanisms are reliable" -- as he puts it, given N&E, the probability of R is either low or inscrutable. In mulling over this conversation earlier, it struck me that there are two major points of contention between Plantinga and Churchland, one in epistemology and one in semantics. Epistemology: foundationalism or anti-foundationalism? Plantinga is a committed foundationalist (I think -- but correct me if I'm wrong about this) -- that is, he thinks that there's a stock of "properly basic beliefs," which cannot be argued for but which would be irrational to reject (the existence of other minds is one of his examples -- we can't justify our belief in other minds, but it would be irrational to just flat-out reject it). And other beliefs, such as our scientific beliefs, rest on this foundation of properly basic beliefs. They aren't indubitable, a la Descartes -- they are open to skeptical worries -- but it wouldn't make any sense to reject them. (I believe that this is a rather deep and interesting point that Plantinga gets from Reid's response to Hume.)
By contrast, Churchland is an anti-foundationalist, following in the model of Hegel, Peirce, and Sellars. (Churchland in fact did his undergrad senior thesis on Peirce and wrote his Ph.D. under Sellars.) Here's how Sellars puts the really key point:
If I reject the framework of traditional empiricism, it is not because I want to say that empirical knowledge has no foundation. For to put it this way is to suggest that it is really "empirical knowledge so-called," and to put it in a box with rumors and hoaxes. There is clearly some point to the picture of human knowledge as resting on a level of propositions -- observation reports -- which do not rest on other propositions in the same way as other propositions rest on them. On the other hand, I do wish to insist that the metaphor of "foundation" is misleading in that it keeps us from seeing that if there is a logical dimension in which other empirical propositions rest on observation reports, there is another logical dimension in which the latter rest on the former. Above all, the picture is misleading because of its static character. One seems forced to choose between the picture of an elephant which rests on a tortoise (What supports the tortoise?) and the picture of a great Hegelian serpent of knowledge with its tail in its mouth (Where does it begin?). Neither will do. For empirical knowledge, like its sophisticated extension, science, is rational, not because it has a foundation but because it is a self-correcting enterprise which can put any claim in jeopardy, though not all at once.
(Churchland is also influenced by Quine's anti-foundationalist "web of belief," though I have reasons of my own for preferring Sellars over Quine.) Semantics: is semantic content a 'target' of natural selection? Well, that depends on just what semantic content is! Plantinga assumes (not without justification) that the bearers of semantic content are beliefs, and that since the 'pairings' between beliefs and behavior are (conceivably) arbitrary, and only behaviors can be selected against, beliefs are invisible to selection. By contrast -- and this is actually what is most radical in Churchland's view, I think, and a position that has considerable merit -- Churchland thinks that semantic contents are not identifiable with beliefs, but with patterns of neuronal activity (modeled in connectionist networks). There is, he thinks, a kind of semantic content that is evolutionarily more primitive than the distinctive kind of content we find in language ("linguaformal content," in his terms). He's quite happy to concede that notions like "belief" (or "desire") only make sense when talking about linguistic animals; he holds only that there's a kind of semantic content in non-linguistic animals, the content involved in representing features of the environment, which is just what brains do. Now, since this kind of semantic content is non-propositional, it cannot be assigned truth-values. But it can still be regarded as reliable or unreliable by other criteria. So Churchland's neurosemantics really amounts to a rejection of Plantinga's initial assumption: that cognitive reliability is measured in terms of producing mostly true beliefs. Instead, Churchland proposes to measure cognitive reliability in terms of producing generally accurate or good-enough maps of the environment. (Maps, of course, that are not used by the organism but which the animal's behavior instantiates.)
But since semantic content is, for Churchland, readily identifiable with patterns of neuronal activity, it can be a target of selection.Kantian Naturalist
April 9, 2013, 11:15 AM PDT
F/N: While the thread is off track a bit, it is an interesting off-track. I think the issue that is emerging is that there is a dearth of understanding of an inference-to-best-explanation case. Yes, we do have knowledge, we do reason logically and correctly, we do have reason to believe we have ability to access truth, though we sometimes err. So, what best explains such? Why? If we were designed to do so, that makes far better sense than expecting a process that rewards mere survival to produce such an entity as a being capable of abstract, logical, truthful reasoning and knowledge. Indeed, it seems the proposed mechanisms of genetic, cultural and social conditioning would decisively undermine truth seeking and tracking capacity. And, that has been a very common argument by various types of materialists across the recent decades: Marxists seeing bourgeois institution induced false consciousness [so what about your own social-cultural class background, Uncle Charlie?], Freudians seeing critics as suffering overly strict potty training [and Uncle Sig, what was your own like . . . ?], behaviourists seeing us as glorified rats trapped in operant conditioning mazes [and Uncle Burrus, how's your part of the maze?], and so forth. When I hear today's Dawkinsians decrying religion and religious upbringing as inducing borderline lunacy and worse, I wonder about what the implications of their favoured form of "free thought" so called, are, on the same terms. Especially when I see the sort of scientism that fails to see that the notion that "science is the only begetter of truth" is a self-refuting philosophical claim.
Likewise, Crick's suggestion that "you're nothing but a pack of neurons," in a context that asserted that "your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules," invited Johnson's retort that such claims implied a quiet excepting of the theorist who made such assertions. Then, some days back, someone was a bit heated in retort to my pointing out that the claim that there is an unbridgeable gulf between the world of our appearances and the world of things in themselves is itself an implicit claim to know about the external world, which was what was being denied. And, as my home discipline has shown by having two major revolutions in 250 years, scientific theories are often more about the empirical reliability of useful models than actually capturing the truth about the world. In short, Plantinga is not exactly writing in some weird abstract vacuum; he could easily have named some names and pulled skeletons out of a few closets. His challenge to account for reason and its deliverances per the surrounding evolutionary materialist frame stands as significant. (My own 101 notes are here for those who want a short and dirty summary, and my similar 101 on a better base for building worldviews is here.) KF
kairosfocus
April 9, 2013 at 10:00 AM PDT
lastyearon
Nonsense. The practices and technologies that humans have come up with are of absolutely no use in determining what’s true. So you can take your genetic analysis, and your telescopes and calculus, and shove em you know where. Because the truth is that evolution didn’t happen, and the sun and everything revolves around the earth.
As is often the case, the logic in your nitwitted parody is so bad that your second non sequitur cancels out the first one, causing you to stumble back onto the truth. Churchland (and Kantian Naturalist) are arguing that unreliable people do not create reliable theories; artificial institutions do. Now here you are admitting that it is, indeed, humans that develop those practices, undermining the very argument you sought to support and never really did follow. Remarkable.
StephenB
April 9, 2013 at 9:49 AM PDT
Plantinga is ignoring the artificial mechanisms for theory-creation and theory-evaluation embodied in the complex institutions and procedures of modern science.
Even from the naturalist perspective, institutions and practices cannot be "artificial mechanisms." They would derive from and build on the individual truth-tracking mechanisms and unreliable belief systems that are shaped solely by survival instincts. An amalgamation or accumulation of unreliable beliefs contains no more truth value than a single unreliable belief. The resultant diversity of opinion would simply add confusion to the unreliability.
StephenB
April 9, 2013 at 9:20 AM PDT