Uncommon Descent Serving The Intelligent Design Community

Epistemology. It’s What You Know


BarryA’s definition of a philosopher:  A bearded guy in a tweed jacket and Birkenstocks who writes long books explaining how it is impossible to communicate through language without apparently realizing the irony of expressing that idea through, well, language. 

Seriously, I have read a lot of philosophy, and I find some of the philosophers’ ideas valuable (that is, when I can decipher them through the almost impenetrable thicket of jargon in which they are usually expressed).  In particular, epistemology (the theory of what we know and how we know it) is one of the most useful philosophical ideas for the ID – Darwinism debate.  Indeed, many of the discussions on this blog turn on questions of epistemology.  So I thought it would be helpful to give a brief overview of the subject in the ID context.  So here goes – 

Consider the following statement one often hears:  “We can be as certain that the diversity and complexity of living things arose by chance and necessity through blind watchmaker Darwinism (BWD) as we are that the earth orbits the sun.” 

To examine this statement, we must first understand what it means to “know” something, and this is where epistemology comes in.  The standard philosophical definition of knowledge is “justified true belief.”  Why not just “true belief”?  Because if we have no basis for our belief, the fact that our belief might in fact be true would be a mere coincidence.  We cannot, therefore, say we know something unless we have evidence to support our belief; in other words, the belief is justified. 

Keep in mind that our beliefs can never be justified in an absolute sense.  You have a justified belief that you are sitting at your computer reading this scintillating post.  Even though this belief is highly justified and almost certainly true, you cannot rule out that you are dreaming or that you are in the Matrix or that you have been deceived by one of Descartes’ demons.   

A corollary to the proposition that beliefs can never be absolutely justified is that justification is always relative.  Indeed, these are two ways of saying the same thing.  Thus, justification of our beliefs comes in degrees; some beliefs are more justified than others.  About some beliefs we can be all but certain they are true.  While there is some remote possibility you are in the Matrix and not actually reading this post, for all practical purposes we can discount the Matrix possibility and conclude that your belief is true.   

It is interesting to note that the Matrix idea is not new.  In the 1700’s George Berkeley (after whom the California city and university are named) proposed that an individual cannot know that an object “is.”  He can only know that he has a “perception” that there is an object.  In his “Life of Johnson” Boswell records Dr. Johnson’s response to Berkeley: 

“After we came out of the church, we stood talking for some time together of Bishop Berkeley’s ingenious sophistry to prove the nonexistence of matter, and that every thing in the universe is merely ideal.  I observed, that though we are satisfied his doctrine is not true, it is impossible to refute it. I never shall forget the alacrity with which Johnson answered, striking his foot with mighty force against a large stone, till he rebounded from it – ‘I refute it thus.’” 

At one level Boswell was right and Johnson was wrong.  As a matter of pure logic, Berkeley’s ideas are irrefutable.  Berkeley would have replied that when Johnson kicked the stone, all he could be certain of was that he had a perception in his mind that he kicked a stone.  He could not be absolutely certain that he had in fact kicked a stone.  Nevertheless, Johnson’s main point is valid.  Our sensory experience of the outside world is all we have.  If we doubt that experience, we are left in a hopeless mire of doubt and skepticism.  Therefore, while we can never be certain that Berkeley was wrong, as a practical matter, in order to live our lives and make progress in science, we can safely ignore him.   

It is beyond the scope of this post to discuss philosophical hyper-skepticism in detail.  For my present purposes, I will note that even hyper-skeptics look both ways when they cross the street.  In other words, while hyper-skepticism may be interesting to discuss in the parlor on Sunday afternoon after lunch, it is perhaps the least practically helpful idea in all of philosophy.  For the scientific enterprise (and life generally) hyper-skepticism may be dismissed with a nod.   

In summary, therefore, we can trust our sense impressions to give us generally reliable information about the world upon which to base our scientific conclusions.  For my purposes here, “sense impressions” include both direct impressions on our senses and impressions from various measuring instruments such as telescopes and microscopes.  Moreover, science has a check against conclusions based upon erroneous sense impressions.  All scientific observations must be “inter-subjectively” testable.  In other words – as the scientists who announced they had achieved cold fusion a few years ago found to their dismay – scientific conclusions are not usually accepted until other scientists replicate the results in independent experiments.   

Having slain the dragon of hyper-skepticism (or at least banished him to his cave like the bad boy he is),  we move on to the practical business of scientific discovery.  This method is familiar to most of us.  In truncated summary the model is: 

1.  Think of a question that needs to be answered.  

2.  Formulate a hypothesis to answer the question.

3.  Test the hypothesis by experiment and/or observation. 

Here is where the concept of “fact” comes in.  In philosophy, a “fact” is a state of affairs described by a true proposition.  In science we say that a “fact” is an objective and verifiable observation.  I have a hammer in my office (I don’t know why, but I really do).  Just now I picked up the hammer, held it above the floor, and dropped it.  The following is a statement of fact:  “It is a fact that Barry’s hammer fell to the floor when he dropped it.”  In science we have an epistemic hierarchy:   

1.  Facts:  The raw objective and verifiable observations.  This is where we have the most confidence that our propositions correspond to the truth.  Unless I’m in the Matrix (a possibility we have decided to ignore), it cannot reasonably be disputed that my hammer really did drop to the floor. 

2.  Hypothesis:  An explanation for a phenomenon that can be tested. 

3.  Theory:  A coherent model that gives a general explanation of observed data. 

About facts, we can be certain, but our conclusions based on those facts (our theories) are less certain.  In fact, some of our most cherished beliefs can turn out to be untrue even though they were highly justified and seemed to correspond to the data perfectly.   

Ptolemy’s cosmology is a perfect example.  Ptolemy, who lived from about 83 to 161 AD, was the greatest of the ancient astronomers.  It is a modern conceit that the ancients were quaint simpletons who thought we live in a cozy little universe.  It is true that the ancients did not know as much as we do, but they were not stupid.  For example, Ptolemy knew the universe is enormous.  In the “Almagest,” his famous work on astronomy, he wrote that the earth, in relation to the distance of the fixed stars, has no appreciable size and must be treated as a mathematical point.   

Not only did Ptolemy know that we live in an immense universe, he also knew that the celestial bodies behave in certain highly predictable ways.  On a certain night of the year Orion, for example, is always in the same place in the sky.  While the stars seemed to be fixed in place, the planets seemed to wander among them (“planet” means “wanderer”).  Ptolemy combined these observations with his belief that the earth was the center of the universe and developed a system, a theory, that predicted the movements of the celestial bodies with great accuracy.   

Briefly, in Ptolemaic cosmology “deferents” are large circles centered on the Earth.  “Epicycles” are small circles the centers of which move around the circumference of a deferent.  So the sun, the moon and the planets have their own epicycles, and each epicycle in turn moves along a deferent around the earth.  This system sounds very complex, and it was.  But it provided astonishingly accurate predictions of the movements of the celestial bodies.  In Ptolemy’s “Handy Tables,” one could find all the data needed to predict the positions of the sun, moon, planets and stars and also eclipses of the sun and moon. 
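
The deferent-and-epicycle construction is, mathematically, just the sum of two uniform circular motions. As a rough sketch of the idea (the radii and angular speeds below are invented for illustration, not Ptolemy’s actual parameters):

```python
import math

def ptolemaic_position(t, R=10.0, w_def=1.0, r=3.0, w_epi=5.0):
    """Apparent position of a planet in an Earth-centered model.

    The epicycle's center moves on the deferent (radius R, angular
    speed w_def); the planet moves on the epicycle (radius r, angular
    speed w_epi) around that moving center.
    """
    # Center of the epicycle travels along the deferent.
    cx, cy = R * math.cos(w_def * t), R * math.sin(w_def * t)
    # Planet travels around the epicycle's moving center.
    return cx + r * math.cos(w_epi * t), cy + r * math.sin(w_epi * t)

# At t = 0 both circles line up, so the planet sits at distance R + r.
x, y = ptolemaic_position(0.0)
```

Because the planet’s distance from Earth oscillates between R - r and R + r, and the faster epicyclic motion periodically opposes the deferent motion, such a model can reproduce both the brightness variations and the retrograde loops of a planet like Mars.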

Ptolemy’s system was so good that it was the basis upon which celestial predictions were made for over a thousand years.  Copernicus first published his theories in 1543.  Forty years earlier, armed only with his knowledge of Ptolemy, Columbus was able to awe the Indians on present day Jamaica by predicting the lunar eclipse of February 29, 1504. 

Importantly, note that Ptolemy’s system has every attribute of a sound scientific theory, and if the scientific method had been around in his day, scientific experiments would have supported his theory.  For example, suppose Ptolemy was interested in accounting for the observed movement of Mars across the sky.  He could have used the steps of the scientific method as follows: 

1.  Question:  What accounts for the observations of Mars’ movements across the sky? 

2.  Hypothesis:  Mars orbits a certain epicycle which in turn moves around the circumference of a certain deferent. 

3.  Observation/test:  When we look at the sky and make numerous detailed observations of Mars’ position, we see that Mars’ motion through the sky is perfectly consistent with the posited epicycle and deferent. 

4.  Conclusion:  The hypothesis is not falsified. 

5.  Theory:  This non-falsified hypothesis is consistent with the general theory that all celestial bodies move along epicycles and deferents.   

Ptolemy’s cosmology was accepted for over 1,400 years.  It began to crumble only when later observations of the celestial bodies required more and more and more adjustments to the theory so that it became staggeringly complex.  Along comes Copernicus with a judgment based upon his religious sensibilities:  Surely God would not have designed such a clunky universe.  There has to be a more elegant answer.  And motivated by his essentially aesthetic judgment, he developed a heliocentric cosmology that gradually displaced Ptolemy.   

Yet another modern conceit is that scholars in Copernicus’ and Galileo’s day rejected heliocentric cosmology for dogmatic religious reasons even though the superiority of Copernicus’ model was intuitively obvious to even the most casual observer.  This is simply not true.  Yes, religious considerations motivated opposition to Copernicus to a degree; that cannot be denied.  But sixteenth-century scholars were not motivated SOLELY by religious considerations, as the conceited modern would have it.  They had good SCIENTIFIC arguments to support their position.  These arguments turned out to be wrong, to be sure, but they were not utterly unreasonable.   

Ptolemy was wrong, but he was not stupid.  His beliefs were justified in the sense that there was substantial evidence to support them.  He observed the celestial bodies move in certain ways; from his perspective the sun appeared to orbit the earth.  Even today we say the sun rises when we know it does no such thing.  Ptolemy’s fundamental assumption was that the earth is the center of the universe.  His assumption was not based upon dogmatic anthropocentrism.  He argued for his conclusion based on the data he observed.  Ptolemy believed that all bodies fall toward the center of the universe.  All falling objects are seen to drop toward the center of the earth.  Therefore, the earth must be the center of the universe.  Ptolemy rejected the notion that the earth rotates on the ground that objects thrown into the air fall back to the same place from which they were thrown, which would be impossible if the earth were rotating beneath them while they were in the air. 

But the most fundamental reason that scholars did not immediately roll over and accept Copernicus was the fact that, for all its clunkiness, Ptolemy’s system had for 1,400 years provided exceedingly accurate predictions about the movements of the celestial bodies.  They said, “The system we have accounts for the observed data exceedingly well and has done so for well over a millennium.  The burden is on you, Copernicus and Galileo, to show us why we should abandon it.”  Only in retrospect, with the advantage of 500 years of experience, do we look back on the scholars of Copernicus’ day with contempt.   

For our purposes it is important to note that for the most part, the “facts” Copernicus used to develop his theory were the same “facts” Ptolemy used to develop his.  Copernicus looked at the sky and saw the same movements of the celestial bodies Ptolemy saw.  But by the time of Copernicus there had been many additional observations, and Ptolemy had had to be tweaked again and again to account for these new observations, and Copernicus began to suspect that these tweakings were ad hoc, and perhaps the theory itself needed to be reexamined.  The death blow, of course, was Galileo’s observations – made possible by improvements in telescope technology – of the four largest moons of Jupiter.  If moons orbit around Jupiter, it is obvious that not everything orbits the earth as Ptolemy believed.   

Now what does all of this have to do with the statement under consideration:  “We can be as certain that the diversity and complexity of living things arose by chance and necessity through BWD as we are that the earth orbits the sun.” 

Once we understand basic principles of epistemology, we understand that this statement is obviously false.  Breaking the statement down we see that it combines three propositions:  (1) We know the diversity and complexity of living things arose by chance and necessity through BWD.  (2) We know the earth orbits the sun.  (3) Our knowledge of “facts” (1) and (2) is epistemically equal. 

But it takes no great perspicacity to see that statement (1) is at a wholly different epistemic level than statement (2).  Statement (2) is an objective and verifiable observation.  We have gone into space and actually observed the earth orbiting the sun.  Conversely, statement (1) has not been the subject of a direct, objective and verifiable observation.  No one has ever observed any living thing evolve into a different species.  Inescapable conclusion:  Statement (3) is false. 

Now all of this is not to say that I am certain that the diversity and complexity of living things did not arise by chance and necessity through BWD.  I am in fact not certain at all.  While I personally do not believe it, this proposition may be true.  My point is not to “disprove” Darwinism.  My point is that the debate will be much more robust if we all use proper epistemic categories.  The story of Ptolemy is a cautionary tale for those who would make statements like the one we discussed above.  There are obvious parallels between Ptolemy and Darwin. 

1.  Ptolemy was a brilliant astronomer who made countless highly detailed observations from which he developed a theory of cosmology.  Darwin was a brilliant biologist (despite the fact that he had no formal credentials in the discipline) who made countless highly detailed observations from which he developed a theory of evolution. 

2.  Ptolemy’s theory is based on a fundamental assumption:  the earth is the center of the universe around which all celestial bodies orbit.  Darwin’s theory is based upon a fundamental assumption:  chance and necessity are the only forces available to account for the diversity and complexity of life. 

3.  If Ptolemy’s fundamental assumption were correct, something like his cosmology is NECESSARILY true as a matter of logic.  If Darwin’s fundamental assumption were correct, something like his theory is NECESSARILY true as a matter of logic. 

4.  Given the information available to him, Ptolemy’s theory accounted for the data brilliantly.  Given the data available to Darwin (and indeed to all biologists through about 1950), his theory accounts for the data brilliantly.   

5.  New data were observed, and numerous ad hoc adjustments had to be made to Ptolemy’s theory.  New data arose (for example, it is now generally accepted that the fossil record does not support gradualism in the way Darwin envisioned), and ad hoc adjustments to the theory have been made (e.g., punctuated equilibrium).   

6.  A new theory (heliocentrism) was proposed to compete with Ptolemy’s.  The new theory rejected Ptolemy’s central assumption, but Ptolemy’s defenders clung to the old theory in large part due to their metaphysical/philosophical/religious commitments and refused to give the new theory a fair evaluation.  A new theory has arisen (ID) to compete with Darwin’s.  The new theory rejects Darwin’s central assumption by positing that a third force (agency) may account for the data.  Darwin’s defenders cling to the old theory in large part due to their metaphysical/philosophical/religious commitments and refuse to give the new theory a fair evaluation.  

7.  Ptolemy and Copernicus were attempting to develop a model that accounted for the same “facts,” i.e., the observed motions of the celestial bodies were the same for both camps.  Darwinists and ID theorists also must deal with the same “facts.”  For example, the fossil record is a fact.  Both camps have to deal with the same fossil record.  It is the interpretation of the facts, not the facts themselves, that makes the difference.   

8.  In the end, new technology made it possible for profound new data to be discovered that simply could not be accounted for in Ptolemy’s theory (Jupiter’s moons orbiting around that planet).  In recent years new data has been discovered (staggeringly and irreducibly complex nano-machines in the cell; extraordinarily complex specified information stored in the DNA molecule) that cannot be accounted for in Darwin’s model.  Consider:  Is the electron microscope analogous to Galileo’s improved telescope? 

9.  Pope Urban VIII persecuted Galileo for his “heretical” ideas in opposition to Ptolemy.  High priests of an entrenched and hidebound secular orthodoxy persecute ID proponents for their “heretical” ideas in opposition to Darwinism and the philosophical materialism upon which it is based.  Consider:  Is Richard Dawkins analogous to Pope Urban VIII?  Are Dembski and Behe the new Copernicus and Galileo?   

This has been fun to write.  I hope my readers enjoy it and find it useful.

Comments
KF, your post is too long and interesting for me to reply this morning. More later to you, perhaps in the subsequent thread. I will take issue with this one point, however, just so we’re sure we can actually keep communicating rationally:

“We know and rely on our minds, to get to the level we are at. So, the reliability of the minds we have is a datum, what is to be explained.”

It is pointless to entertain the notion that our minds are not reliable, since if it is true, we will not know it.

tribune7:

“I will disagree with premise that there is any such thing as ID metaphysics. Those who discuss/support/research ID may have metaphysical views, but they are just tangential to ID itself.”

You have simply ignored my point then. ID rests on the assumption that intelligent causation is empirically distinguishable from the rest of causation, but this is not the case. Rather, it is a metaphysical speculation. This is not tangential to ID; the notion of “intelligent causation” and its detection is the very core of ID.

“It would, however, fall within the purview of science — not restricted to the methodology of ID — to rebut claims by materialists that the mind is merely a series of chemical reactions.”

Nobody has any idea how anyone could possibly ever do this, so I believe your point is moot.

“I think a more accurate way of saying this is: Complex works of known design exhibit CSI. In nature, only biological entities exhibit CSI. Hence, it is fair to conclude biological entities are designed.”

Same old attempt at semantic sleight-of-hand on the term “design” here, sorry. Please give me an operationalized definition of the term “known design” in this statement. You will find you cannot.

Q:

“A slight disagreement with your claim is that evolutionary processes aren’t claimed to be goal driven. They are claimed to be response driven. That is, evolution doesn’t make mutations to seek a point on a fitness curve; mutations survive by how they respond to the fitness curve.”

It depends where you draw the boundaries around “the evolutionary process,” I think. In a broader view, the goal of evolution is always to find the genotype that will reproduce most efficiently in a given niche, yes?

aiguy
January 11, 2008 at 11:40 AM PDT
aiguy, in 108, said: “For example, you might want to say that intelligent agents must be able to learn over time by using information from the environment, and must be goal-directed. In that case, of course, evolutionary processes would be considered intelligent.”

A slight disagreement with your claim is that evolutionary processes aren’t claimed to be goal driven. They are claimed to be response driven. That is, evolution doesn’t make mutations to seek a point on a fitness curve; mutations survive by how they respond to the fitness curve.

Q
January 11, 2008 at 10:06 AM PDT
“ID-Metaphysical states t[…]”

I will disagree with the premise that there is any such thing as ID metaphysics. Those who discuss/support/research ID may have metaphysical views, but they are just tangential to ID itself. It would, however, fall within the purview of science -- not restricted to the methodology of ID -- to rebut claims by materialists that the mind is merely a series of chemical reactions.

“ID-Scientific, in contrast, is restricted to what we can empirically demonstrate. For example, we can demonstrate that human beings (and some other animals) can design and build artifacts of complex form and function (let’s call this ‘CSI’). And we can demonstrate that biological structures (like flagella) have CSI too. We cannot demonstrate, however, what it is that enables people and other animals to create CSI.”

I think a more accurate way of saying this is: Complex works of known design exhibit CSI. In nature, only biological entities exhibit CSI. Hence, it is fair to conclude biological entities are designed.

tribune7
January 11, 2008 at 06:18 AM PDT
5] When I write a theorem prover that creatively derives a proof that no human has ever thought of, we need not assume my system transcended materialistic determinism. Theorems, AIG, are logical implications of axioms. And [another sometimes useful stylistic “bad habit,” JT] the key is the ones that the axioms cannot reach – per Gödel – but which minds can conceive and indeed use.

6] I do trust the reliability of the mind (usually, normally, in general, except when I don’t, like when people hallucinate or they’re confused or just have bad thinking habits). The fact that we decide to trust minds has nothing to do with materialism, or evolution. Either minds are reliable, or they are not. If they are reliable, then they are reliable whether or not evolution or materialism is true. If they are not reliable, then we are wasting our time trying to argue about anything.

You have it precisely backwards, just as above. We know and rely on our minds to get to the level we are at. So, the reliability of the minds we have is a datum, what is to be explained. But evo mat, a phil that often hides under the lab coats of science, is dynamically impotent to achieve such, on grounds outlined in 106 supra. Thus, it is self-undermining and logically incoherent.

7] You make the bare claim that computers do only what programmers tell them to do, which is patently false. You claim we can show them to be non-creative, which is false. And finally you admit that human beings are finite, but fail to see that puts them in the same boat in which you put computers: For all you can show, human beings are non-creative and instinctive, merely following the programming put in our heads by our Designer. (Oh, and for all we know, our Designer is exactly that as well, as was His Designer, who happened to be fully instinctive and unintelligent, but a necessary being all the same) . . . . HUMAN BEINGS are an empirically observed fact. 
“AGENTS” is a philosophical concept with no operationalized definition. No theory based on the idea of “AGENTS” is empirical. “Who designed the designer . . .?”

First, computers do as a matter of bare fact carry out their instructions, which originate with: programmers, collectively, starting with microcode kids (only young grunt engineers can be persuaded to write something so mindless . . .). Even if the instructions make no sense: no creativity, no common sense, no sanity. They are products, not creators – as your failed example just above illustrates. By sharpest contrast, OUR FIRST, MOST DIRECT EXPERIENCE AS HUMAN BEINGS IS THAT OF SELF-CONSCIOUS AGENCY. That is how we become aware of ourselves, and of other agents in our environment, and how we intelligibly reflect upon, communicate, decide and physically act into our world. To deny this is to deny the self-evident and thus leads straight to the intellectual and moral absurdities that this thread so often illustrates. And, IMHCO, no global scientific research programme on origins or worldview core to such that cannot make room for that is credible. That means: evolutionary materialism. Next, we do know that we are contingent and thus have an origin. We live in an observed cosmos that also evidently had a beginning, one that reflects fine-tuned complex organisation that facilitates life and entails a lot of FSCI. Thus, the simplest, most factually adequate and coherent, elegant explanation is that we are the product of an intelligent, powerful, necessary being who wanted to create life in a cosmos set up for it. Such a necessary being of course has no origin, thus no cause. You have made a category error. Further to this, we illustrate that it is possible to create [small-c sense] intelligent agents; not least by an access to creativity [as already remarked on] that transcends the instinctual. Cf this telling discussion by good old materialism-leaning prof Wiki, which I now excerpt a little of:
Instinct is the inherent disposition of a living organism toward a particular behavior. Instincts are unlearned, inherited fixed action patterns of responses or reactions to certain kinds of stimuli. [NB: See why I compare them to programmed MRAC-type control systems?] Innate emotions, which can be expressed in more flexible ways and learned patterns of responses, not instincts, form a basis for majority of responses to external stimuli in evolutionary higher species, while in case of highest evolved species both of them are overridden by actions based on cognitive processes with more or less intelligence and creativity or even trans-intellectual intuition. Examples of instinctual fixed action patterns can be observed in the behavior of animals, which perform various activities (sometimes complex) that are not based upon prior experience and do not depend on emotion or learning, such as reproduction, and feeding among insects. Other examples include animal fighting, animal courtship behavior, internal escape functions, and building of nests. Instinctual actions - in contrast to actions based on learning which is served by memory and which provides individually stored successful reactions built upon experience - have no learning curve, they are hard-wired and ready to use without learning, but do depend on maturational processes to appear.
The contrast is blatant. Also, from the above, the creation of intelligent agents such as ourselves is physically and logically possible. We just have not come close yet. Furthermore, we have good reason, as outlined supra, to infer that an intelligent agent who is a necessary being is very logically possible, and the credible root of all physical possibilities in our observed cosmos.

8] What are the criteria for agency, and how do we test, for any entity or system, whether or not the criteria are met? This is the challenge that ID must meet before we can even begin to evaluate the truth of ID as a scientific theory.

For starters, kindly read my always linked, section A – a 101-level introductory summary on the points relevant to that project, which I believe gives enough to do just what you asked and assumed has not been done; in fact it has been done, many times, by many people [most of them far more august than I], but this is not generally recognised in the midst of the noxious, atmosphere-poisoning, blinding fumes cast up by the burning of slander-oil soaked strawmen. I believe you will find that it begins from what we do observe and routinely use, then moves to the issue of information as a chief marker of intelligence in action, thence to issues of intelligence, agency etc, setting up the onward biologically and cosmologically relevant cases to follow. There are handy links on the key terms clustered at the in-page table of contents. In simplest compressed essence, agents are first recognised from our own experience, and their intelligence is seen from their responses to situations that transcend chance, necessity and blends thereof, including mere adaptive programming by other agents. 
A characteristic trace of agency as opposed to chance and necessity is FSCI, which, by virtue of the very low relative statistical weight of the relevant functional states, is beyond the reach of chance [the other major source of contingency; programming is of course known to be a species of agent action and is based on configuring FSCI and embedding it so that the relevant system responds in its context based on the built-in smarts]. We routinely see this in the case of informational signals in the presence of noise – e.g., this thread – and so to refuse to see it when we have, say, DNA to reckon with is selective hyperskepticism. Once that is done, we can see the relevance of Dembski’s classic definition of Intelligent Design as a rather broadly relevant field of science:
intelligent design begins with a seemingly innocuous question: Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause? . . . Proponents of intelligent design, known as design theorists, purport to study such signs formally, rigorously, and scientifically. Intelligent design may therefore be defined as the science that studies signs of intelligence
I suspect that whatever else is of significance in other posts has already come up. If not, kindly highlight it to me . . . so I can pick up when I return for follow-up. GEM of TKI

kairosfocus
January 11, 2008 at 04:04 AM PDT
All: I see the hot stuff has moved on to a follow-up thread. I will follow up such there. But there is still specific stuff here – as well as the matter of at least one overdue major apology, by JT, for slander. So I will comment on points here that maybe this thread will be better for:

1] AIG, 98: I think your intuitions about computer minds have been skewed by your working so close to the metal, however . . . . If I asked one of my younger colleagues to look at my core dump they would wonder if I was making an obscene proposition. (It’s all GUI debuggers now of course). The hardware is a rumor to these kids, the way my wife is aware there’s something called an “engine” that makes her car go but isn’t sure in which end of the station wagon it resides. Anyway, yes, I debug the modules with test input, but once they are working together, there is no definition of “working reliably”.

And so these kids literally don’t know what they are talking about, once it moves beyond the cosseted world of nicely set up GUIs (how many virtual machine levels is that as abstracted from the real world of NAND gates and capacitors to suck up power supply switch transients – I once had a system that only would accept silvered mica for that, not the usual ceramic disks; I never ever figured out why (but could easily show it) -- and digital feedback and hardware response times and clock rise/fall time effects and skew etc etc . . .? A dozen?) . . . ! As to working reliably, I come from the school that says: no program of sufficient complexity is ever fully debugged. We just have high confidence that it will run reliably on the sort of inputs it is, on our experience and testing, likely to see. BTW, that’s one reason to avoid fresh-release software, much less betas. And for too many software companies, what they call releases should really be called beta tests! [But beta testing is a freebie . . .] 
(From my PoV, hanging around with these escapees from perambulators is spoiling your clear view on the machines!) Actually, working at the "assembly language view" level [onlookers who need a bit of 101 hand-holding: that's one classic definition of computer architecture, and the one I found most useful in my work and teaching – and JT, compounding my stylistic sins, I haven't a clue now where I first learned that 20+ years ago!] made me very much aware that computers are wonderful discrete state machines optimised to carry out specific instructions with great reliability, including the ability to carry out [pre-]programmed branching on conditions. As to minds, I have one; I routinely interact with those who have minds. The only ones that come close to a computer in their behaviour, on my long observation, are the ones who are psychologically ill; who notoriously do the same things over and over again expecting a different result, or who notoriously are utterly logical and completely out of touch with the real world, and cannot find a way to revise their thinking and acting. (So, I can see that minds have computers [we call them brains] but can act as more, far, far more . . . except when something goes very wrong in the i/o front-office computer . . .) In short, having had to deal with both, up close and personal, I know the BIG difference -- especially when it comes to the source of the creativity and (in almost the control-system sense of model-reference adaptive control) adaptiveness [far better than that anthropomorphism, "learning," AIG!] in a computer program. 2] Did you really write a million monkeys program using one? . . . Naw, just challenged the Darwinistas to pony up on their classic rhetorical ploy against the issue of accessing FSCI, using a gedankenexperiment:
CASE STUDY -- of Monkeys and keyboards (updated): Updating this tired C19 rhetorical counter-example used by Darwinists, take a million PCs, with floppy drives modified to spew magnetic noise across the inserted initially unformatted disks, perhaps using zener diode noise circuits or a like source of guaranteed random noise. Then, once every 30 seconds for a year, run the noise circuit, and then test for a formatted disk with 500 or more bits of data in any standard PC format. We get thereby 10^12 tests per year. Continue for the lifetime of the observed cosmos, i.e. 10^25 seconds or so, giving 10^37 tests. Is it credible that we will ever get a properly formatted disk, or thence a message at this reasonable threshold of complexity, by chance? [NB: The 500-bit threshold is chosen as 2^500 ~ 10^150, and because it is credible that the molecular nanotechnology of life has in it orders of magnitude more information than that, judging by the 300 - 500,000 4-state elements (equivalent to 600,000 to 1 million 2-state elements) in the DNA code of the simplest existing unicellular life forms. Also, observe that we are here putting a far more realistic threshold of accidentally generated functional complexity than we see in the often-met-with cases of designed genetic algorithms that carry out targeted searches, step by step promoting increments towards the target. Random-walk-type searches, or searches reducible to that, in short, only "work" when the searched space is sufficiently richly -- and implausibly [cf. here Denton's telling discussion in his classic 1985 Evolution: A Theory in Crisis, ch. 13] -- populated by islands of functional complexity.]
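The arithmetic in the case study above is easy to check with a short script. The figures (one trial per 30 seconds, a million PCs, the 500-bit / 2^500 threshold, the 10^25 duration) come from the text; the variable names are mine, and note that the stated total of 10^37 follows if the 10^25 figure is read as years (read literally as seconds it gives ~10^29, which only strengthens the conclusion):

```python
from math import log10

# One million PCs, one trial every 30 seconds, for a year:
pcs = 1_000_000
trials_per_pc_per_year = 365 * 24 * 3600 // 30    # one trial / 30 s
tests_per_year = pcs * trials_per_pc_per_year     # ~1.05e12, i.e. ~10^12

# Taking the note's 10^25 figure as years reproduces the stated 10^37:
total_tests = tests_per_year * 1e25               # ~1.05e37

# The 500-bit threshold: 2^500 ~ 3.3e150 configurations.
odds = total_tests / 2 ** 500
print(f"tests/year ~ 10^{log10(tests_per_year):.1f}")   # ~10^12.0
print(f"total      ~ 10^{log10(total_tests):.1f}")      # ~10^37.0
print(f"odds       ~ 10^{log10(odds):.0f}")             # ~10^-113
```

Even granting the most generous reading, the search samples about one part in 10^113 of the 500-bit configuration space.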
No prizes for guessing why the Darwinistas have refused to pony up, for years now. But then, given the signs of serious mental challenge emanating from the hardcore Darwinista camp, maybe I should fire up the old pencil and paper, then get me a few components, a roll of rosin-core solder and a hot diagonally-sliced-off conical-point soldering iron (the best combination of fine tip, edge and solid hot surface there is – on 100k++ personal welds of experience). The classic iron-point Antex 15 watt being my preference (I can only see a 25 W at good old Radio Spares, now of course simply "RS"); I can feel its heft, smell it and hear it even as I type. 3] I have spent a lifetime building demonstrations intended to impress thesis advisors, then commercial sponsors, and now to pry funding from the hands of government bureaucrats. I do not try to make my systems reliable or useful, I try to make them look intelligent, which is a very different goal. (Considering the nefarious uses that people may want AI for, the fact that my systems don't actually perform any tasks very usefully is a salve for my conscience). So no, I'm not shooting for "instinctive" behavior that people can reliably anticipate, but rather behavior that makes sense but in surprising and novel ways. And, are you reliably hitting it? 4] these arguments are all for naught, like somebody shouting at the Wright brothers that if God had meant man to fly, he would have given them wings. First, none of them show that no algorithmic machine can attain human-like cognition . . . Funny: these are based on the core principles of statistical thermodynamics. As Yavorsky and Pinsky nicely put it – a couple of my fav Russians – in discussing the underlying reason for the reliability of the 2nd law of thermodynamics. Here's my summary, from point 4, App 1 of the always linked, that JT imagines is a crib from prof Dembski [hint to JT: WD's a German . . .
I found this stuff by haunting the friendly local communist party bookshop, once I had discovered that it had good Russky math-sci-tech stuff – that must have given my watchers in the J'can security services (they routinely monitored all youth leaders of consequence) real headaches!]:
Yavorski and Pinski, in the textbook Physics, Vol I [MIR, USSR, 1974, pp. 279 ff.], summarise [at "introductory level"] the key implication of the macro-state and micro-state view well: as we consider a simple model of diffusion, let us think of ten white and ten black balls in two rows in a container. There is of course but one way in which there are ten whites in the top row; the balls of any one colour being for our purposes identical. But on shuffling, there are 63,504 ways to arrange five each of black and white balls in the two rows, and 6-4 distributions may occur in two ways, each with 44,100 alternatives. So, if we for the moment see the set of balls as circulating among the various different possible arrangements at random, and spending about the same time in each possible state on average, the time the system spends in any given state will be proportionate to the relative number of ways that state may be achieved. Immediately, we see that the system will gravitate towards the cluster of more evenly distributed states. In short, we have just seen that there is a natural trend of change at random, towards the more thermodynamically probable macrostates, i.e. the ones with higher statistical weights. So "[b]y comparing the [thermodynamic] probabilities of two states of a thermodynamic system, we can establish at once the direction of the process that is [spontaneously] feasible in the given system. It will correspond to a transition from a less probable to a more probable state." [p. 284.] This is in effect the statistical form of the 2nd law of thermodynamics. Thus, too, the behaviour of the Clausius isolated system above is readily understood: importing d'Q of random molecular energy so far increases the number of ways energy can be distributed at micro-scale in B, that the resulting rise in B's entropy swamps the fall in A's entropy.
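The counts Yavorski and Pinski quote are simple binomial coefficients, which onlookers can verify for themselves (this quick check is mine, not part of their text):

```python
from math import comb

# Ten white and ten black balls in two rows of ten positions. A
# macrostate is fixed by k, the number of whites in the top row; its
# statistical weight is C(10, k) ways to pick the top slots for whites
# times C(10, 10-k) ways to place the remaining whites below.
def weight(k: int) -> int:
    return comb(10, k) * comb(10, 10 - k)

print(weight(10))                          # 1: all ten whites on top
print(weight(5))                           # 63504: the 5-5 macrostate
print(weight(6), weight(4))                # 44100 each: the two 6-4 cases
print(sum(weight(k) for k in range(11)))   # 184756 = C(20, 10) in all
```

The evenly mixed 5-5 macrostate alone carries over a third of the 184,756 total arrangements, which is exactly why random shuffling gravitates toward it.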
Moreover, given that FSCI-rich micro-arrangements are relatively rare in the set of possible arrangements, we can also see why it is hard to account for the origin of such states by spontaneous processes in the scope of the observable universe. (Of course, since it is as a rule very inconvenient to work in terms of statistical weights of macrostates [i.e. W], we instead move to entropy, through S = k ln W. Part of how this is done can be seen by imagining a system in which there are W ways accessible, and imagining a partition into parts 1 and 2. W = W1*W2, as for each arrangement in 1 all accessible arrangements in 2 are possible and vice versa, but it is far more convenient to have an additive measure, i.e. we need to go to logs. The constant of proportionality, k, is the famous Boltzmann constant and is in effect the universal gas constant, R, on a per molecule basis, i.e. we divide R by the Avogadro Number, NA, to get: k = R/NA. The two approaches to entropy, by Clausius, and Boltzmann, of course, correspond. In real-world systems of any significant scale, the relative statistical weights are usually so disproportionate, that the classical observation that entropy naturally tends to increase, is readily apparent.)
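Both steps in the parenthesis above (k = R/NA, and logs turning the multiplicative weight of a partitioned system into an additive entropy) can be checked numerically; this sketch is mine, using the standard SI values for R and NA:

```python
from math import log

# Boltzmann's constant as the gas constant per molecule, k = R / N_A:
R = 8.314462618        # J/(mol*K), universal gas constant
N_A = 6.02214076e23    # 1/mol, Avogadro's number
k = R / N_A            # ~1.380649e-23 J/K
print(k)

# Logs make entropy additive over a partition: with W = W1 * W2,
# S = k ln(W1 * W2) = k ln W1 + k ln W2 = S1 + S2.
W1, W2 = 252, 210
S_total = k * log(W1 * W2)
print(S_total - (k * log(W1) + k * log(W2)))   # ~0: additivity holds
```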
Nary a Turing machine in sight! And, not a hard logical or physical impossibility, but overwhelming improbability via exhaustion of probabilistic resources; in praxis, tantamount to reliably not going to happen in the real world. THAT's the sort of reason I pay WD attention – he is speaking to things I know from somewhere else, and on very independent grounds. And it is why I hold that minds, through intelligent reasoning, can see workable but practically-speaking-impossible-to-randomly-search-out configs, and then pull together enough of a prototype to do debugging and testing to get something to work within a reasonable time. Also, it is how we can come out of "nowhere" with an utterly surprising strategic framework that changes all the rules of the game – indeed, takes advantage of the old guard's being locked into the old rules as a part of its design. [ . . . ]
kairosfocus
January 11, 2008 04:03 AM PDT
Q, I mention this because intelligence is basically a property of something - like of an agent. But, mind, is considered (AFAIK) as an entity unto itself. If you consider mind to be an "entity" (or substance) unto itself, then you are a dualist by definition. Is it more than mere supposition that mind and intelligence are so intertwined? Isn't it possible, like I think you alluded to earlier, that mind and intelligence are separable, at least to the extent that a mind may exist without the property of intelligence? This is also a matter of interminable philosophical debate. If you ask famous arch-materialist Daniel Dennett, he will answer no, without intelligence (specifically linguistic ability) there is no mind (or at least conscious awareness). Others (including me) disagree. Nobody knows. Additionally, is it necessary for the concept of intelligent design to even depend upon the notion of mind? I mean, so long as it can be demonstrated that "intelligence" occurred, wouldn't a debate about "what is mind" be off the mark in terms of ID? I think that is a very good question. All you have to do is define what "intelligence" means in a way that we could look at some arbitrary system or process and decide if it was intelligent or not. For example, you might want to say that intelligent agents must be able to learn over time by using information from the environment, and must be goal-directed. In that case, of course, evolutionary processes would be considered intelligent.
aiguy
January 11, 2008 03:08 AM PDT
PS: FYI JT, Notice how at the turn of the 1990's -- long before I ever heard of Dembski or ID -- I used language on the point "[b] all phenomena in the universe, without residue, are determined by the working of purposeless laws acting on material objects, under the direct or indirect control of chance." And, following Lewis, Schaeffer et al, and many others indeed [but these, on long consideration of whether CSL makes sense in the end, are my own thoughts too . . .], I went to the consequences:
--> a cause is not a ground, and so . . .
--> if our thoughts are wholly accounted for on a-logical cause-effect chains tracing to chance and necessity, we have an undercutting defeater for the general credibility of mind.
--> Which is a patent absurdity; for we need to use minds to think even materialistic thoughts.
Nope, on nearly 20 years experience with the above summary, I don't think that committed materialists will easily surrender their views to mere evidence of absurdity, which they can always find one excuse or another to brush aside. But, I have also read Acts 17, from which I know that the one who was laughed out of court in the Areopagus in AD 50 in the end prevailed, and in so prevailing became the real father of western civilisation as we know it. Indeed, I am told that the speech is now at the foot of the hill as a bronze plaque -- the ever so telling altar to the unknown god having long since crumbled into dust; and, that the street by the hill bears the name of a certain Bishop Dionysius. That is also in part why my son, the budding LKF [currently he hopes to be a physicist and has the mind for it], bears the name of that Apostle; and it is why I intend to equip him with the power of Acts 17, DV, as soon as he is able to handle it. (Which is looking like real soon now . . .) Thanks, Trib: you are right. But, equally, we need to dig in for a terrible, grinding, multi-generational cultural struggle for the hearts, minds and souls of men. God, give us strength for it. GEM of TKI
kairosfocus
January 10, 2008 09:14 PM PDT
All: As an interim response on what now seems to be close to the heart of the issue on knowledge and credibility of mind, I will now post here a summary argument -- tracing to works by C S Lewis, Francis Schaeffer, etc etc [and originally dating to the turn of the 1990's] on the self-referential self-defeating nature of evolutionary materialism once it tries to account for mind:
[evolutionary] materialism . . . argues that [a] the cosmos is the product of chance interactions of matter and energy, within the constraint of the laws of nature. Therefore, [b] all phenomena in the universe, without residue, are determined by the working of purposeless laws acting on material objects, under the direct or indirect control of chance. But [c] human thought, clearly a phenomenon in the universe, must now fit into this picture. Thus, [d] what we subjectively experience as "thoughts" and "conclusions" can only be understood materialistically as unintended by-products of the natural forces which cause and control the electro-chemical events going on in neural networks in our brains. (These forces are viewed as ultimately physical, but are taken to be partly mediated through a complex pattern of genetic inheritance and psycho-social conditioning, within the framework of human culture.) Therefore, [e] if materialism is true, the "thoughts" we have and the "conclusions" we reach, without residue, are produced and controlled by forces that are irrelevant to purpose, truth, or validity. Of course, the conclusions of such arguments may still happen to be true, by lucky coincidence — but we have no rational grounds for relying on the “reasoning” that has led us to feel that we have “proved” them. And, if our materialist friends then say: “But, we can always apply scientific tests, through observation, experiment and measurement,” then we must note that to demonstrate that such tests provide empirical support to their theories requires the use of the very process of reasoning which they have discredited! Thus, [f] evolutionary materialism reduces reason itself to the status of illusion. [g] But, immediately, that includes “Materialism.” For instance, Marxists commonly deride opponents for their “bourgeois class conditioning” — but what of the effect of their own class origins? 
Freudians frequently dismiss qualms about their loosening of moral restraints by alluding to the impact of strict potty training on their “up-tight” critics — but doesn’t this cut both ways? And, should we not simply ask a Behaviourist whether s/he is simply another operantly conditioned rat trapped in the cosmic maze? In the end, [h] materialism is based on self-defeating logic, and only survives because people often fail (or, sometimes, refuse) to think through just what their beliefs really mean. As a further consequence, [i] materialism can have no basis, other than arbitrary or whimsical choice and balances of power in the community [that is, might makes "right"], for determining what is to be accepted as True or False, Good or Evil. So, [j] Morality, Truth, Meaning, and, at length, Man, are dead . . .
This is of course the remark at 48 - 49 in the Charles Darwin thread of Aug 20 2007 that set off a very interesting chain that, IMHCO, in the end only underscored the force of the point. Later on I will come back, DV, and mop up a few points. I note this thread has set off a follow-up thread. GEM of TKI
kairosfocus
January 10, 2008 08:44 PM PDT
Aiguy, I realize you were discussing the issues of "mind" that have been brought up, and framed them in terms of the metaphysical and the scientific. But it raises the question, at least to me, as to whether the comments about the scientific model suggest that intelligence and mind are synonymous. Or, at least, two sides of the same coin. I mention this because intelligence is basically a property of something - like of an agent. But mind is considered (AFAIK) as an entity unto itself. It seems like they are often bandied about interchangeably, as you did with the comment "In science, then, when we refer to 'mind', we are not speaking of res cogitans or any other dualistic or metaphysical conception of mental substance. Rather, we are simply referring to our mental abilities themselves. What mental abilities would these be? As it turns out, there is no agreement at all among scientists on which abilities are definitively required in order to be considered intelligent" (emphasis added). Is it more than mere supposition that mind and intelligence are so intertwined? Isn't it possible, like I think you alluded to earlier, that mind and intelligence are separable, at least to the extent that a mind may exist without the property of intelligence? Additionally, is it necessary for the concept of intelligent design to even depend upon the notion of mind? I mean, so long as it can be demonstrated that "intelligence" occurred, wouldn't a debate about "what is mind" be off the mark in terms of ID?
Q
January 10, 2008 08:08 PM PDT
Thanks, StephenB. To onlookers, I'll summarize the argument I've made to StephenB (and KF) here. (Actually, it is always my argument to everybody; I'm rather a one-trick pony in these debates). It is currently a matter of philosophical speculation, rather than a scientific result, that a "mind" is nothing but the functioning of the brain. Just the same, it is nothing but philosophical speculation that a "mind" is anything but the functioning of the brain. In other words, the truth or falsity of metaphysical dualism cannot currently be evaluated by appeal to empirical evidence. Now, consider two versions of ID theory, called ID-Metaphysical and ID-Scientific. ID-Metaphysical states that a) mind is a type of fundamental thing (a substance or force or cause or ...), different from physical things, and that b) this type of thing is responsible both for humans' mental abilities and for the creation of life. This is an ancient and meaningful proposition, but it cannot be evaluated scientifically. ID-Scientific, in contrast, is restricted to what we can empirically demonstrate. For example, we can demonstrate that human beings (and some other animals) can design and build artifacts of complex form and function (let's call this "CSI"). And we can demonstrate that biological structures (like flagella) have CSI too. We cannot demonstrate, however, what it is that enables people and other animals to create CSI. In science, then, when we refer to "mind", we are not speaking of res cogitans or any other dualistic or metaphysical conception of mental substance. Rather, we are simply referring to our mental abilities themselves. What mental abilities would these be? As it turns out, there is no agreement at all among scientists on which abilities are definitively required in order to be considered intelligent (learning? grammatical language? self-awareness?), but let's accept arguendo that these abilities include the ability to create CSI.
ID-Scientific, then, states that a) mind is the name for abilities such as CSI creation, and that b) there exists some entity with a mind that was responsible for creating the CSI we see in biological structures. In other words, the central claim of ID-Scientific is that "the CSI in living things was created by something that had the ability to create CSI." Hopefully you can see that when we look closely, ID-Scientific is not a helpful theory, because it doesn't say anything at all. ID-Scientific is vacuous, and ID-Metaphysical is... metaphysical. So there is no scientifically useful theory of ID - unless somebody comes up with another version of ID theory.
aiguy
January 10, 2008 04:18 PM PDT
Aiguy: I would like to respond to your comment, but unfortunately, a compelling personal issue has taken me away from the internet. It would not be fair for me to offer my remarks at this time without giving you a chance to respond. Best wishes.
StephenB
January 10, 2008 02:49 PM PDT
KF -- "I guess it is easier to attack the man than to deal with the issue." It would be easier to whistle Yankee Doodle while standing on your tongue than to make a reasoned case for this issue :-) Our side has won the intellectual part of the debate, KF. All that's left now is the screaming.
tribune7
January 10, 2008 12:46 PM PDT
PS: I see just now in 99 by JT -- on one point of substance that I cannot but speak to immediately -- the sad twisting of the example of the falling die that illustrates how agents, chance and necessity can all be independently at work in a real-world situation, into the utterly unwarranted inference that I claimed the tossing of a die exhibits FSCI! Namely (and note my habit of attribution . . .):
[JT, 99:] So a cubic shape with some dots painted on each side is so incredibly complex that a mechanism could not come up with. O.K. Dice have been made like this for thousands of years. (The original dice were the joints from animal bones, BTW, which the ancients tossed to discern the will of God, thus the phrase “throwin dem bones”). Over time they became a cube, but if a human today decides to use dice, he’s just deciding to use something that already exists, instead of reinventing the wheel just to prove he’s an intelligent agent. Furthermore, the thing that is actually making the dice most definitely is a machine. So, you’re calling a purely imitative act - “I’m going to make my dice just like everyone else does”- some indeliable mark of supernatural nonmechanistic “agency”.
Not so at all: yet another red herring leading out to a slander-oil soaked strawman to be ignited to distract from the real issue. Onlookers, I actually do use dice in a case on FSCI. Kindly scroll back to 38 above or the always linked and see where I do claim something about FSCI using dice and its likely source -- something JT would have seen if he had simply read with attention before superciliously accusing:
Sub-case study: a hypothetical, dice-based information system: If one were so inclined, s/he could define a six-state code and use a digital string of dice to store or communicate a message by setting each die in turn to the required functional value for communicating the message. In principle, we could then develop information-processing and communication systems that use dice as the data-storage and transmission elements; rather like the underlying two-state [binary] digital code-strings used for this web page. So also, since 6^193 ~ 10^150, if a functional code-string using dice requires significantly more than 193 to 386 six-state elements [we can conveniently round this up to 200 - 400], it would be beyond the edge of chance as can be specified by the Dembski universal probability bound, UPB. [That is, the probabilistic resources of the observed universe would most likely be fruitlessly exhausted if a random-walk search starting from an arbitrary initial point in the configuration space were to be tasked to find an "island" of functionality: not all "lotteries" are winnable (and those that are, are designed to be winnable but profitable for their owners). So, if we were then to see a code-bearing, functionally meaningful string of say 500 dice, it would be most reasonable to infer that this string was arranged by an agent, rather than to assume it came about because someone tossed a box of dice and got really lucky! (Actually, this count is rather conservative, because the specification of the code, algorithms and required executing machinery are further -- rather large -- increments of organised, purposeful complexity.)]
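The dice arithmetic in the sub-case study above can be checked in a couple of lines (a sketch of mine, not part of the original note):

```python
from math import log10

# How many six-state dice are needed to exceed the ~10^150 universal
# probability bound?  6^n >= 10^150  =>  n >= 150 / log10(6) ~ 192.8,
# so 193 dice suffice, as the note states.
n_min = 150 / log10(6)
print(n_min)               # ~192.8
print(log10(6 ** 193))     # ~150.2, i.e. 6^193 ~ 10^150

# A functional string of 500 dice sits far beyond the bound:
print(log10(6 ** 500))     # ~389: one config in ~10^389
```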
Notice: inference to AN agent, and nary a word about the supernatural. If you want to see my reasoning behind that case study, kindly read here, especially the case study here. In short the commentator known as JT is now spinning madly, to distract attention from the material points in the thread, and from his slanders. Sad, and one for prayer as Kairos reminded me so gently but pointedly the other day. Thanks again Kairos! I will return later, DV, to address points of merit. Meanwhile I have real life to go back to. GEM of TKI
kairosfocus
January 10, 2008 12:32 PM PDT
Re Junkyard Tornado: Sadly, I see I must now make another formal complaint on trollish behaviour at UD, within a few days! I guess it is easier to attack the man than to deal with the issue -- here, epistemology -- on the merits. And of course a personal attack conveniently distracts attention from the main issue, which is what I addressed in the first case at 38 above. DV, I will address further points on the merits, later. However, so serious is the sort of accusation made, that I find it important to note for the relevant onlookers what it is and why it is false. I will therefore reply on several points, for I see that this commenter insists on perpetrating a damaging falsehood -- one which he (were he as innocent, open-minded and naive as he pretends) should know, straight out and from my direct protest and the absence of cases in point, is patently false:
. . . Furthermore, it was my honest impression, before I realized the piece was written by you, that the writer had merely lifted huge sections from Dembski unattributed with only minor variations. It was an off-handed remark I made, I will admit, but I saw no reason to revise my comments once I recognized it was by you.
1] As onlookers will see from my previous protest, JT began by making the following false accusations, in 48, to which I replied in 78:
JT, 48: It is from this which kairosfocus just requested I read. Incidently kairos, why would this fifty page screed not have any author’s name attached as if it were immutable truth handed down from on high or something. Oops. I guess you wrote it. If I’m not mistaken, there are several passages you’ve taken directly from Dembski unattributed.
2] Now, immediately, to accuse one of using academic work in an intellectual context without attribution is plainly to accuse one of plagiarism; so the attempted evasion excerpted above, which appears now that I have protested that JT unwarrantedly accused me of plagiarism, is exposed as – this is the only word for it, sorry to say -- a lie. [And BTW, "plagiarism detectors" that do not pick up links and citations will give suitably misleading conclusions. I do cite WD several times and with attribution, sir. He is a source, but not the only one, nor is he the basis for my thinking -- which comes straight out of my own pure-applied physics background, and even cites one of my old favourite textbooks in extenso at the critical point of departure on defining information. At least two more old favourites of mine come in for mention too.] 3] My initial response in 78 also addressed the accusation of arrogance on my part, and explained why I have now begun to use my initials and links onward to my contact and identity – a precaution that JT's misbehaviour fully warrants. For, the very always linked page is headed and footed in part as follows [as I cited in 78]:
A Kairosfocus Briefing Note: HEAD: GEM 06:03:17; this adj. 06:12:16 - 17 to 07: 12: 13a.3.1 and 30 FOOT: . . . [NB: Because of abuse of my given name in blog commentary threads, I have deleted my given name from this page, and invite serious and responsible interlocutors to use my email contact below to communicate with me.] This page has been . . . revised and developed, to date; so far, to clean up the clarity and flow of the argument, which is admittedly a difficult one, and add to the substance especially as key references are discovered one by one such as the recent Shapiro article in Sci Am . . . (DISCLAIMER: While reasonable attempts have been made to provide accurate, fair and informative materials for use in training, no claim is made for absolute truth, and corrections based on factual errors and/or gaps or inconsistencies in reasoning, etc., or typos, are welcome.)
3] Now, a look at the actual note will reveal that the structure of my argument is very different from that of WD, and begins from very different points of departure and references, e.g. F R Connor on information theory, and Harry S Robertson and even Brillouin on the informational school of statistical thermodynamics. Not to mention things like the Atanasoff-Berry Computer, Sir Fred Hoyle, and Crick and Watson et al. 4] Namely, I start from the concept of information and communication-system functional information as an increasingly recognised key component of the cosmos, to go with space-time and matter-energy. I use my own model of the t'comms system that my former students will instantly recognise from their old T'comms Syss lecture notes – a version of the Shannon model that emphasises the code-decode [or mod-demod] aspects. (In the classes I used the idea of analogous physical and mathematical operators – learned from my favourite Russians -- to get to the electronics and math of modulation and demodulation and on to digital comms models, and of course the layer-cake ISO model fits right in. That's my style: a key introductory case study unfolding along a learning spiral into the structured, integrated riches of the field, as will be instantly recognisable by any of my former students. Who will in many cases remember their chant: "More work, sir; more work, sir!") In that context, I develop the concept of functionally specified complex information [which makes more sense to me than the more general complex specified information, as say Atom will testify], which in Appendix 3 you will see has roots in Orgel and other 70's – 80's OOL researchers, the source on this being through Thaxton et al. I then point out that the very inference to message in the presence of noise is an inference to design or agency.
Even my multi-clause long-sentence habit – BTW, confession is good for the soul: bad! – (and note how fond I am of double dashes, bullet points and brackets – the last as my HS English teachers despaired of) is a further characteristic of my writing when I don't have time or inclination to edit down into chunked short sentences. 5] The idea of a default inference to lucky noise – not exactly a WD term so far as I know -- as the source of such FSCI, is intuitively and easily discarded in comms theory due to t'comms system functionality as specification; multiplied by the vastness of the associated relevant configuration space [a term rooted in the idea of phase space but leaving off the issues of motion] and the difficulty of a random-walk-based search process accessing such islands of functionality. I give details through a microjets assembly thought experiment that pulls the late great Sir Fred Hoyle's tornado in a junkyard – IRONICALLY -- down to quasi-molecular scale so that statistical physics principles can be used. It was worked out live in debate with Pixie, a former commenter at UD, as is linked from Appendix 1, section 6. In that context, the insistence on lucky noise as the default on DNA etc. is easily seen as selective hyperskepticism. Thus, I come at the Dembski design filter from a different direction, one rooted in my own intellectual background. 6] In that general context -- and this is where JT came in to make objections, I believe, and in so doing sought to use red herrings leading out to convenient slander-oil-soaked strawmen he is now trying to set afire -- I addressed the issue of the three sources of cause; which, by the way, traces at least to Plato's The Laws, Book X, which I link and cite in Appendix 2.
I showed through the simple thought experiment of a tumbling die [first developed by me in debates on another blog a couple of years back] – one that I have never seen WD use, and which I have cited twice already in this thread – that they may all be independently at work in the same situation, and so the explanation of the one is not reductive relative to the inference to the other two:
CASE STUDY ON CAUSAL FORCES/FACTORS -- A Tumbling Die: For instance, heavy objects tend to fall under the natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance. But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes! This concrete, familiar illustration should suffice to show that the three causal factors approach is not at all arbitrary or dubious -- as some are tempted to imagine or assert.
7] Once I showed that FSCI is a reliable sign of agency at work, not chance and/or natural regularities tracing to mechanical necessity, I then addressed three key cases: origin of life, origin of body-plan level biodiversity and the origin of the fine-tuned organised complexity of the cosmos as a whole. In so doing I cite several sources, including WD of course, to whom I owe the phrase Organised Complexity, though I hardly thought it necessary to give a citation on a phrase that is fairly commonly used in current discussions! Beyond that I paused to deal with certain broader phil issues and in them I cite not only WD but WLC, inter alia. 8] In Appendix 1, I detailed my thermodynamics reasoning, starting from my own look at Clausius' example no 1 for defining entropy, which traces to Sears and Salinger or any other basic thermodynamics textbook. I draw out that the heat-importing subsystem naturally increases its entropy and point out that it takes coupling to energy converters and exhaustion of waste heat to take in energy without doing that. I then pointed out that there are cases of natural energy converters that are spontaneous, e.g. hurricanes – which I raised in a discussion by correspondence with the late Henry Morris in the early 1990s. [This gentleman's willingness to take on and address over several months a multipage correspondence with an unknown out in the boonies has deeply impressed me ever since. I was later astonished when on asking about an ICR video, only requesting a quotation, I received the tape in the mail. I rushed down to the post office to remit payment right away. I still am astonished by this generosity by the much despised YECs.] 9] I then went on to the issues linked to statistical thermodynamics, which I believe is not an approach Mr Dembski uses and is significantly different from that of Prof Sewell. 10] Finally, I see some story in the latest from JT on his construction of a bespoke plagiarism detector to use on my online work.
On the track record of what he has already done, I have no confidence in any such claimed software or its results. However, I have no doubt that such a person, hiding behind anonymity, will feel no compunction about producing a fraudulent result that “proves” that I have excerpted in extenso from WD and all sorts of people without attribution. And as the slander-oil soaked strawman erupts in flames, filling the air with noxious and blinding smoke, all too many will be taken in and will fail to address the substantial issue on the merits. I therefore invite onlookers to inspect the always linked for themselves, and see that – since I write in a highly distinctive style even in informal notes and have used a significantly different approach from the circle of Mr Dembski et al as I have just outlined -- such, if it appears, will simply be further proof of his dishonesty. GEM of TKI

kairosfocus
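The Clausius example mentioned under point 8 can be written out explicitly. This is the standard textbook result (as in Sears and Salinger): for a quantity of heat δQ passing from a hot body at temperature T_a to a cooler, heat-importing body at T_b, with T_a > T_b,

```latex
\Delta S_{\text{hot}} = -\frac{\delta Q}{T_a}, \qquad
\Delta S_{\text{cold}} = +\frac{\delta Q}{T_b}, \qquad
\Delta S_{\text{total}} = \delta Q\left(\frac{1}{T_b} - \frac{1}{T_a}\right) > 0 .
```

The importing subsystem's entropy rises by δQ/T_b, which is the point being drawn out above: raw energy import increases a subsystem's entropy unless it is coupled to an energy converter that does work and exhausts waste heat.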
January 10, 2008 at 12:12 PM PDT
kairosfocus: As to the plagiarism charge - It wasn't a charge, I didn't actually use the word "plagiarism", and my caveat was "If I'm not mistaken..." Furthermore, it was my honest impression, before I realized the piece was written by you, that the writer had merely lifted huge sections from Dembski unattributed with only minor variations. It was an off-handed remark I made, I will admit, but I saw no reason to revise my comments once I recognized it was by you. As to my delay in getting back to you regarding this, I looked for a plagiarism detector yesterday on the net, but couldn't find one fast enough, so I thought I'd whip one up on my own - just something that steps through a document one word at a time, taking the next n-word phrase and submitting it to Google. Each search phrase would be accompanied by Dembski -KairosFocus. (Also you would manually edit the document first to remove all quotes and block quotes.) Then the program just checks for the phrase "did not match any documents" in the returned Google page. But I got hung up on the COM interface for the Windows WebBrowser control (shdocvw.dll), so am still working the kinks out (in case someone knows of something like this that already exists - it would have to go through the entire document). ...the need for more sophisticated comparative difficulties analyses which take into account inter alia the sort of Kantian questions addressed in the so-called irrelevant discussion on epistemology. That discussion FYI JT, was added recently precisely because of meeting with Kantianism as an objection to ID; e.g. cf recent interactions with Q and others here on how inference to design necessarily requires prior assumption of the existence of relevant agents. Well.
My impression was that ID'ists felt that any denial regarding the existence of "intelligent agency" as defined by them had to spring from some perverse denial of rational knowledge, or the validity of our senses, or some such, and therefore the question of epistemology arose. I don't think it does. And as I noted, your long discussion on epistemology immediately followed an assertion regarding the existence of intelligent agency being self-evident. More on the case of 10,000 coins, I showed that the macrostate is so statistically overwhelmed by the near 50-50 macrostate to the point where its occurrence in the real world is by design not chance, reliably. OK fine, this is something I never denied; the question is, is the human design process actually a mechanical process. And BTW, it is HARD to get actual random numbers on a PC. If I am not mistaken, the remark by you above, as well as several by BarryA, were elicited by the following remark of mine: So if a human decides to call heads 10000 times in a row it's intelligent agency. If a computer decides to call heads 10000 times in a row it's not. IOW, that was my characterization of BarryA's stance. For some reason, both you and BarryA (both of you, independently) thought I meant "random number generator" when I said computer: you: So, we can confidently say that a fair-biased H/T flip chance-programmed PC would not reach this macrostate on the gamut of the observed cosmos, it is impossible in the soft sense. If a claimed fair random throw program does deliver such a state, it was most likely rigged or else was most likely grossly defective BarryA: I assume you mean the computer has been programmed to generate a random selection of heads and tails. It is impossible for a random number generator to call heads 10,000 times in a row. So your question is literally meaningless For the record I agree that it is extremely unlikely that a random number generator would generate 10000 heads in a row.
But an infinite number of other computerized functions could generate 10000 heads in a row. However programs operate according to law, which according to you isn't design. First, the trichotomy speaks to: chance [as just described in brief], necessity, agency as the three observed categories of cause at work, which as my example of the tumbling dice shows, may be all quite familiarly at work in a given situation, as was already excerpted at 38, as was the contrast of a hypothetical dice based information system: PHASE I: A Tumbling Die: For instance, heavy objects tend to fall under the natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance. But, if the die is cast as part of a game [and just what random search algorithm credibly came up with say Monopoly . . . ?], the results are as much a product of agency as of natural regularity and chance. [emphasis added] Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes! (This concrete, familiar illustration should suffice to show that the three causal factors approach is not at all arbitrary or dubious - as some are tempted to imagine or assert.) So a cubic shape with some dots painted on each side is so incredibly complex that a mechanism could not come up with it. O.K. Dice have been made like this for thousands of years. (The original dice were the joints from animal bones, BTW, which the ancients tossed to discern the will of God, thus the phrase "throwin dem bones"). Over time they became a cube, but if a human today decides to use dice, he's just deciding to use something that already exists, instead of reinventing the wheel just to prove he's an intelligent agent. Furthermore, the thing that is actually making the dice most definitely is a machine.
So, you're calling a purely imitative act - "I'm going to make my dice just like everyone else does" - some indelible mark of supernatural nonmechanistic "agency". In another forum I remarked that Mount Rushmore for some reason seems to represent for ID the pinnacle of human design capability. But that as well was a purely imitative enterprise (with one additional objective - "Let's make something really really big.") Monkey see, monkey do. Computers are not reasoning, they are in principle simple, physically instantiated algorithm executing machines. Yes, we all understand that the "computer", i.e. "the Turing Machine", is an extremely simple device. But to say that that, coupled with a complex enough program, cannot design is mere assertion. Humans by contrast, are routinely experienced as and observed to be conscious and independent thinkers capable of not just surprising but actually creative decisions that on reflection can be seen to be rational but are utterly unexpected and can come out of “nowhere” to utterly break through a situation. That is what transformational, inspired leadership is about. ...intelligent agents are CREATIVE problem solvers, who can pull a solution that is unanticipated by and beyond the credible reach of any random or deterministic search process, out of the thin air of the real quasi-infinite cosmos... Now we're just into the realm of poetry and rhetoric. I don't really have the desire to browbeat anyone, to force them to acknowledge the truth of my ideas by the sheer volume and insistence of my arguments. As some further background on me, and where my ideas came from - When I first had courses in the theory of computability many years ago, I was left with the impression that the algorithm is the most systematic form of description in existence. Anything that can be described accurately in English, for example, can be described with an algorithm. The relevance of all this to the evolution debate became apparent to me.
I took it at face value that a human or anything else could be characterized as a mechanism. It was not even controversial to me. This, of course, is diametrically opposed to Dembskian thinking. However, considering everything an algorithm, you can make observations like the following: if f(x) outputs y then f(x) equates to y. Thus y cannot be more complex or improbable than f(x) is. Obvious enough, and its application to humans is immediately apparent in that the DNA (plus the cell-replication machinery) does equate to a human. But certainly some mechanism preceded that, and that must equate to a human as well. So, at some point you must reach a first extremely complex cause that has always existed. (At this point Dembski invokes a nonmaterial intelligent agent.) God is presumably extremely complex but he exists by chance. (I have stated all the above much better elsewhere on this forum in the last few days.) But anyway, I think Christians need to come to grips with the fact that there is nothing controversial about the self-evident observation that we are the output of mechanisms. Trying to demand through the pretense of science that the world accept that our God exists however seems pointless. Most people already believe in God anyway - who do you think you're going to convince - hard core atheists?

JunkyardTornado
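The n-gram search JT describes above (step through the document, take each consecutive n-word phrase, check it against a corpus) can be sketched locally. This is a minimal sketch comparing two texts directly rather than submitting phrases to Google; the function names and sample strings are illustrative, not JT's actual code:

```python
def ngrams(text, n):
    """Return the set of consecutive n-word phrases in text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_phrases(doc_a, doc_b, n=6):
    """Phrases of n words that appear verbatim in both documents."""
    return ngrams(doc_a, n) & ngrams(doc_b, n)

# Illustrative sample texts with one long verbatim overlap.
a = "the explanatory filter infers design when chance and necessity are eliminated"
b = "critics note the filter infers design when chance and necessity are eliminated too"
print(sorted(shared_phrases(a, b)))
```

As JT notes, a real tool would first strip quoted and block-quoted material; the choice of n is the sensitivity knob, since longer phrases give fewer coincidental matches.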
January 10, 2008 at 10:34 AM PDT
KF, You're an interesting fellow, no question. I think your intuitions about computer minds have been skewed by your working so close to the metal, however. Let's see... Regarding random input - I've never bothered with a Zener card; for virtually all practical purposes in AI pseudo-rand is just fine. Did you really write a million monkeys program using one? (WHY?) Of course, to develop these systems to the point where they work reliably, you inject controlled inputs and see if they give the expected outputs and internal states — core or internal state dumps and all. If I asked one of my younger colleagues to look at my core dump they would wonder if I was making an obscene proposition. (It's all GUI debuggers now of course). The hardware is a rumor to these kids, the way my wife is aware there's something called an "engine" that makes her car go but isn't sure in which end of the station wagon it resides. Anyway, yes, I debug the modules with test input, but once they are working together, there is no definition of "working reliably". Unpredictability by you or me does not equate to creative conscious, rational thought: “instincts” with situational learning relative to captured expertise from programmers and domain experts is more like what we are setting up here. For, we design systems to respond to their environment in reliable and useful ways, and hope that our error-trapping and recovery subroutines are good enough to catch the dangerous possible states. So, it had better be true that you determine what they do and decide their i/o responses to their environment. [BTW what is your view on the use three processor archis and equally different algors to do a vote out of three redundancy on mission-critical equipment? Mine is that those who say that stuff tends to get stiff in more or less the same way for all three at the same point of i/o behaviour or internal processing to support same, have a point.]
Otherwise, get a good lawyer and some serious malpractice insurance! This is just what I mean - I think that all of your experience with building real systems that actually perform important tasks for human beings hinders your ability to see my point here. In contrast, I have spent a lifetime building demonstrations intended to impress thesis advisors, then commercial sponsors, and now to pry funding from the hands of government bureaucrats. I do not try to make my systems reliable or useful, I try to make them look intelligent, which is a very different goal. (Considering the nefarious uses that people may want AI for, the fact that my systems don't actually perform any tasks very usefully is a salve for my conscience). So no, I'm not shooting for "instinctive" behavior that people can reliably anticipate, but rather behavior that makes sense but in surprising and novel ways. Configurations in a quasi-infinite space so vast that no mere random walk based search algorithm, regardless of hill climbing elements, is likely to ever get close, on the gamut of probabilistic resources of the observed cosmos.... And, when it comes to experts and rules and crispification of summed and weighted fuzzy set membership based inputs, that’s great for relatively routine though suitably technically difficult situations. ... Further to all of this, this is where the neural networks model of mind gets you... Sorry, KF, but these arguments are all for naught, like somebody shouting at the Wright brothers that if God had meant man to fly, he would have given them wings. First, none of them show that no algorithmic machine can attain human-like cognition. But I've already told you that I don't believe human-like thought will be attained by Turing machines, so enumerating various programming techniques and complaining they don't get us to mind doesn't help here. I think what you are missing is that nobody has ever shown that brains are Turing machines. 
Philip Johnson: “[t]he plausibility of materialistic determinism requires that an implicit exception be made for the theorist.” This is nonsense I'm afraid. When I write a theorem prover that creatively derives a proof that no human has ever thought of, we need not assume my system transcended materialistic determinism. Better by far to accept for now that we ain’t got a clue, and trust the reliability of the mind over that of speculative materialist metaphysics under the false flag of “scientific theories” that imply that it is not reliable enough to use intelligently and reasonably! Again, I find this argument to be nonsense. I do trust the reliability of the mind (usually, normally, in general, except when I don't, like when people hallucinate or they're confused or just have bad thinking habits). The fact that we decide to trust minds has nothing to do with materialism, or evolution. Either minds are reliable, or they are not. If they are reliable, then they are reliable whether or not evolution or materialism is true. If they are not reliable, then we are wasting our time trying to argue about anything. I am saying that in the end, what we have is symbol/signal manipulating machines under the control of programs and their authors, in ways that we can show are non-creative, though perhaps surprising in the sense that we finite, fallible creatures with limited minds cannot anticipate before the fact. For, computers ain’t got no common sense. They will do exactly what we tell them to do, regardless of consequences. And when people do that, we call them stupid or insane, not intelligent. I fear we are not making progress here, KF. You make the bare claim that computers do only what programmers tell them to do, which is patently false. You claim we can show them to be non-creative, which is false.
And finally you admit that human beings are finite, but fail to see that puts them in the same boat in which you put computers: For all you can show, human beings are non-creative and instinctive, merely following the programming put in our heads by our Designer. (Oh, and for all we know, our Designer is exactly that as well, as was His Designer, who happened to be fully instinctive and unintelligent, but a necessary being all the same). So soon as a reasonable means of powered heavier than air flight came along, it was done within a few years, by talented tinkerers. First, thinking machines are harder to build than flying machines. Second, one could argue that we are still waiting for technologists to deliver "reasonable means" for achieving human-like thought: It's only been a few years that we've had many cycles and much memory to play with, and we're still riding Moore's law, so who knows what will happen. Databases, rule-based expert systems, fuzzy sets, adaptive and learning systems etc are just baby steps towards that future if we ever get the breakthrough. OK - AGAIN we agree. We do not know if we will get the breakthrough, but we do not know that we will not! And that is what undermines ID as a scientific theory. Actually, remember that the central issue in ID is that AGENTS are an empirically observed fact, e.g. consider ourselves. OK, we are finally to the heart of the matter. Please, please read this carefully: HUMAN BEINGS are an empirically observed fact. "AGENTS" is a philosophical concept with no operationalized definition. No theory based on the idea of "AGENTS" is empirical. ID pretends that human beings are one member of a class of things called intelligent agents, but nobody can say if there are any other members, because nobody has bothered to explain what the criteria are for membership. What are the criteria for agency, and how do we test, for any entity or system, whether or not the criteria are met?
This is the challenge that ID must meet before we can even begin to evaluate the truth of ID as a scientific theory.

aiguy
January 10, 2008 at 09:41 AM PDT
StevenB, Jerry, BarryA, I think now we have a pretty good agreement about the Galileo/ID analogy, so I will leave it at that. Thanks for a stimulating discussion. StevenB, we seem to be thinking alike. JPII was a great pope in many respects, although I too was surprised that he let some of the science & faith related issues slip away without capitalizing on them. Perhaps he was too preoccupied with other pressing issues to properly deal with evolution, Darwinism and Galileo, or, as good a philosopher and theologian as he was, perhaps he still couldn't see all the way through all the complexities that are involved. I think the current pope is much more interested in these things, so I expect that sooner or later there may be a significant movement in this area.

rockyr
January 10, 2008 at 09:07 AM PDT
Jerry: There is a sequel, albeit in the second empire of man 12,000 years beyond. The survivors of the first era of robotic C-Fe society, the spacers, become in effect an agrarian cult living on an isolated planet -- I think it is Trantor, long since stripped of the remnants of the first empire, but I forget details now. [Come to think of it, there are other tales in which a post-novel R. Daneel shows up or is mentioned, including the case of the woman who used a robot's arm to kill her husband, having lived centuries on beyond the lifetime of the human detective partner of RD, who suddenly becomes an adulterous lover if memory serves.] R. Daneel and a few of his friends live on and pass as humans in the post-robotic empire. He is outed privately by the new protagonist, as a high official serving the emperor and discreetly applying the laws of robotics to try to make things come out for the best. A secret robotic order in effect that serves as mankind's hidden guardian angels. Shows the human hunger for eternity and for supernatural protection . . . even breaking through Asimov's atheism. CSL would have spoken of it as a hunger that points beyond our space-time world, just as Joy did. You are right that there are more Qs than As. As usual. GEM of TKI

kairosfocus
January 10, 2008 at 04:29 AM PDT
kairosfocus, Two things about Asimov's writings. In Foundation and Earth, R. Daneel has to make a decision and the book ends with the plight of Daneel on what to do. Asimov never wrote a sequel to this story because he did not know where to go. What was the future for man and intelligence? He then wrote prequels instead and died never addressing Daneel's plight. In his great short story, The Last Question, he tells of a super, super computer that has intelligence and tries to answer the question of how to reverse entropy. And the answer could be the basis for many of the discussions on this site. Both stories ask questions about the nature of intelligent life. Fun reading but maybe some insights too.

jerry
January 10, 2008 at 04:03 AM PDT
PS: Oh, I forgot: on The central claim of ID either hinges fully on the truth of dualism (which is indemonstrable) or it is vacuous. Actually, remember that the central issue in ID is that AGENTS are an empirically observed fact, e.g. consider ourselves. So, if we are consistent in our thinking, agents must be understood to be potential actors in many situations. Now, too, however we try to explain or reduce it, mind is an empirical, personally experienced and relied upon reality [not least by the very reductionists themselves], and one that we cannot confine to matter without running into serious question-begging and, on the most popular ways to do so, absurdity that points to violation of self-evident truth. Second, we then can analyse based on that empirical datum: --> such agents -- as I discuss in my always linked that JT so abhors but has not addressed on the merits -- are known to leave FSCI etc as reliable traces of their activity. And FSCI demands both high contingency and functional specificity that is beyond the reach of chance on the gamut of the cosmos, based on statistical thermodynamics principles of reasoning. [A config space is in effect a phase space without the movement issues. We configure many systems and as a rule their performance is sensitively dependent on their specific configuration, often with a little room for error, but not much; cf this post.] --> So, when we see FSCI, we may reasonably infer on warrant by empirically anchored inference to best explanation -- the fundamental framework of science from the perspective of epistemology -- to agency, even amidst the possibilities for chance and/or necessity as well. --> Thus, the real issue is not to beg the worldviews question when we were not there to observe the causal process directly. --> On life systems on earth, we cannot differentiate whether or not the relevant agents responsible for the FSCI of DNA etc are within or beyond the observed cosmos.
And that has been explicit from the work of Thaxton et al, the very first technical level ID work, 1984. --> But also the observed cosmos as a whole is contingent [it had a beginning] and exhibits a cosmogenetic physics that manifests organised complexity that is astonishingly, and on dozens of parameters, fine-tuned. So much so that the live option alternatives are a quasi-infinite array of sub-cosmi with randomly and nicely varying physics, or agency. --> BOTH are metaphysical explanations at the level of candidate necessary beings to explain the observed contingent cosmos we inhabit. So, it is obviously improper to object to the one that, as it is metaphysical, it is unproved [as if we may then freely resort to the other, often presented as being "scientific" not "religious"]. There is a descriptive name for that sort of intellectual inconsistency: selective hyperskepticism, following Simon Greenleaf's telling analysis. --> Instead, we should face the fact that we are now in the province of comparative difficulties across worldviews options, and we should see which is more factually adequate, coherent and explanatorily elegant -- across the full spectrum of our experiences. [That includes the experience and reports of the millions who coherently testify that they have met the God who made us, and have had their lives changed for the good by it -- including some of the greatest minds of all time.] So, I think we should know enough to distinguish warrant from proof, and recognise that proof runs out of steam real fast when we deal with worldviews level questions. No surprise: proofs are relative to axioms, and worldviews are about the fundamental first plausibles in our thinking and living. GEM of TKI

kairosfocus
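The "beyond the reach of chance on the gamut of the cosmos" claim rests on simple arithmetic over the configuration space. A minimal sketch of that arithmetic; the 500-bit specification and the 10^150 trial bound are the commonly cited Dembski-style figures, used here purely for illustration and not taken from the original note:

```python
import math

# Illustrative probabilistic-resources calculation.
bits = 500                      # a 500-bit functional specification
trials = 10 ** 150              # generous bound on events in the observed cosmos

log10_p = -bits * math.log10(2)           # log10 of per-trial hit probability
expected_hits = trials * 10 ** log10_p    # expected successes over all trials

print(f"per-trial probability: 10^{log10_p:.1f}")
print(f"expected hits over all trials: {expected_hits:.2f}")
```

On these figures a blind search is expected to hit one specific 500-bit configuration less than once even if every event in the observed universe is counted as a trial; that is the arithmetic shape of the argument, whatever one makes of its application to biology.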
January 10, 2008 at 03:22 AM PDT
Well, well, AIG: Welcome to the classic 8-bitter MPU fraternity! May the old Z80 -- a killer upgrade on the 8080 by its original design team, having walked out on Intel [following the old Fairchild tradition I suppose; doubtless aided and abetted by one of the venture cap hawks haunting the local watering holes, and making 35% ROI on rejected ideas . . .] -- never die! [I see Elenco still offers a Z80 SBC as a teaching tool. Having had volcano acids eat much of my old Heathkit 6802, I am tempted to get one.] The old 6800 and 6809 [a great upgrade that!] were my bailiwick, though. I also loved several of the 6502 family i/o devices, and in my most significant project used the 6402 UART as a voice transmission device, taking advantage of its asynchronicity. And there was the day when a student of mine who was working at a French-built radar in Jamaica showed me a schematic towards one of his projects as a final year student: as I recall, three 6800's were running the heart of that radar station. That was a heart-stopping moment of respect for the power of a good old Generation 1 8-bitter, a classic cut-down to 8 bits on the old 16-bitter PDP 11 by DEC. Oddly, DEC is now a part of Compaq and thence of HP, whose 21 was my first serious little calculator -- lost my 42S a few years back somehow, after over 10 years of great service. Looking forward to getting a 50 3-d grapher with colour and all sorts of interesting abilities, driven by a 32-bitter ARM if memory serves. And this is being typed on a HP/Compaq laptop -- never mind my discomforts with some of the corporate policies nowadays. [But I always did prefer Tektronix for 'scopes. I fondly remember a 465 -- never trusted the B though -- that served as my right arm for some years!] But, enough on nostalgia! 1] Unless one adds random input Of course, all that will then happen is that on a random input, the output will be controlled to be driven by whatever that random input wants: out of control, rather than random.
And given the nasty things that can happen if the processor in question is tied to actuators with power behind them, we usually don't want that at all. We want controlled outputs, thank you. It is also very hard to get random inputs, truly random inputs. My best suggestion is a Zener running in full breakdown used as a noise source, guaranteed random by underlying thermodynamics and quantum theory. That's why I put that in the upgrade to the million monkeys on typewriters game. 2] Aside from the fact that they are far too complex for me to predict with my little brain, their behavior is determined not just by my code and the hardware, but by their interactions with their environment, which I also can’t predict. So in what sense can it be said that I determine their behavior, and decide what they shall do? Of course, to develop these systems to the point where they work reliably, you inject controlled inputs and see if they give the expected outputs and internal states -- core or internal state dumps and all. [Logic Analysers simply take that up to the next level.] Unpredictability by you or me does not equate to creative conscious, rational thought: "instincts" with situational learning relative to captured expertise from programmers and domain experts is more like what we are setting up here. For, we design systems to respond to their environment in reliable and useful ways, and hope that our error-trapping and recovery subroutines are good enough to catch the dangerous possible states. So, it had better be true that you determine what they do and decide their i/o responses to their environment. [BTW what is your view on the use three processor archis and equally different algors to do a vote out of three redundancy on mission-critical equipment? Mine is that those who say that stuff tends to get stiff in more or less the same way for all three at the same point of i/o behaviour or internal processing to support same, have a point.] 
Otherwise, get a good lawyer and some serious malpractice insurance! 3] I don’t know where our decisions come from, but I believe that you don’t either, and that for you to say they come out of “nowhere” is saying the same thing in a less forthcoming manner. So I shall play devil’s advocate, and take the opposing view: I say arguendo that human decisions come from our biologically determined nervous systems interacting with their environment. I used "nowhere" in the sense of one of my favourite Bible stories - one I often use in ways that your average Sunday School scholar probably never heard of! (That is as an example of breakthrough, transformational leadership and its trials and tribulations and potential.) In 1 Sam we can see how David came from a poor family on the back bushes and scrub lands, walked into the Israelite camp and turned its hopeless situation around in five minutes using a technology configuration that was low tech for even then, and risky, but high concept based on the invincibility of a refined technique backed up by surprise power to get a quick unexpected knockout win. Then 20 years later, he brought back the high tech from his sojourn among the same philistines [related to the Greeks, and having the then advanced tech of blacksmithing] and transformed Israel into the dominant power in the Levant for two generations. His successors, unfortunately, didn't know how to really build on that success, especially once we got beyond Solomon the consolidator. Not bad at all for the "wash-belly" and runt of the litter who was dismissed by his own big brother just before he took out Goliath with these words: "Why have you come down here? And with whom did you leave those few sheep in the desert? I know how conceited you are and how wicked your heart is; you came down only to watch the battle." [1 Sam 17:28.] 
So, as the context of my remark indicates, it is the power of mind to conceive and imagine, then figure out novel configurations and integrated clusters that should be able to work in the real world, then organise their implementation, that I was speaking of. Configurations in a quasi-infinite space so vast that no mere random-walk-based search algorithm, regardless of hill-climbing elements, is likely ever to get close, on the gamut of the probabilistic resources of the observed cosmos. [Cf my discussion of nanobots and microjets as a relevant thought experiment here. BTW, how close to doing this are we now?] And, when it comes to experts and rules and crispification of summed, weighted fuzzy-set-membership-based inputs, that's great for relatively routine though suitably technically difficult situations. But when we talk about transformational leadership that breaks the old rules and changes the game totally -- as did that 16-year-old boy with lions and bears for breakfast, who thought giants for lunch were no problem -- that is a totally different order of behaviour. Further to all of this, this is where the neural networks model of mind gets you:
[Sir Francis Crick:] The Astonishing Hypothesis is that "You," your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules . . . [Prof Philip Johnson:] . . . [to be consistent, Crick should be willing to preface each of his writings:] “I, Francis Crick, my opinions and my science, and even the thoughts expressed in this book, consist of nothing more than the behaviour of a vast assembly of nerve cells and their associated molecules.” . . . . “[t]he plausibility of materialistic determinism requires that an implicit exception be made for the theorist.”
Non-starter. Self-referentially incoherent, and unable to credibly get to a point where we have a reasonable account of the minds on whose reliability the edifice of science itself rests. Better by far to accept for now that we ain't got a clue, and trust the reliability of the mind over that of speculative materialist metaphysics sailing under the false flag of "scientific theories" that imply the mind is not reliable enough to use intelligently and reasonably! 4] simply saying that computer behaviors are determined by programmers just doesn’t begin to address the question. I am saying that in the end, what we have is symbol/signal-manipulating machines under the control of programs and their authors, in ways that we can show are non-creative, though perhaps surprising in the sense that we finite, fallible creatures with limited minds cannot anticipate them before the fact. Especially when we have debugging and troubleshooting to do in development time! For, computers ain't got no common sense. They will do exactly what we tell them to do, regardless of consequences. And when people do that, we call them stupid or insane, not intelligent. 5] the fact that our research in the past fifty years hasn’t resulted in HAL yet is a very bad argument against strong AI. It took far longer for us to create heavier-than-air flying machines (and they said it couldn’t be done!). We knew from birds that flight could be done, and indeed we had gliders of one form or another for centuries. As soon as a reasonable means of powered heavier-than-air flight came along, it was done within a few years, by talented tinkerers, with the pro-grade scientists nipping at their heels. Within 12 years the tinkerers were lagging the scientists and engineers, and so "Wright" became just another name in the industry, part of Curtiss-Wright. It is no accident that the Smithsonian scientist's name comes first, even though it is the Wrights who properly flew first.
[BTW, the story I've heard is that the Wrights went to high school in Jamaica, at Munro, and got a push towards aviation from a physics teacher there -- the cliffside high winds are an invitation . . .] We know from ourselves that intelligent embodied creatures can be made, but so far we have not come close to the breakthrough to creative imagination that then feeds the logical analysis at which computers excel. Databases, rule-based expert systems, fuzzy sets, adaptive and learning systems etc are just baby steps towards that future, if we ever get the breakthrough. I'd sure like to meet R Daneel and make his acquaintance . . . GEM of TKI
kairosfocus
January 10, 2008
02:51 AM PDT
KF, (various details of computer implementation...KF displays pretty impressive knowledge of architecture!) (...and so, computer hardware is deterministic) Yes. (Unless one adds random input; let's ignore that for now). When therefore a computer “thinks” about something, and “decides” etc, what is really going on is that the programmers, collectively, have done so. I don't know (can't possibly predict) what my systems will do, and this is the case whether or not they are operating as I intend them to. Aside from the fact that they are far too complex for me to predict with my little brain, their behavior is determined not just by my code and the hardware, but by their interactions with their environment, which I also can't predict. So in what sense can it be said that I determine their behavior, and decide what they shall do? By the very sharpest contrast, humans do make intelligent, clever, creative decisions that come out of “nowhere” and transform the world. As I've said, I don't know where our decisions come from, but I believe that you don't either, and that for you to say they come out of "nowhere" is saying the same thing in a less forthcoming manner. So I shall play devil's advocate, and take the opposing view: I say arguendo that human decisions come from our biologically determined nervous systems interacting with their environment. Ok, there's a clear disagreement; now, how do you propose we test each of our hypotheses and follow the evidence where it leads? I think we cannot. I think you will see in summary above why I do not think that computers think in any sense worth talking about. I'm sorry, KF, but if you made an argument against machine intelligence, I missed it. There are obviously a whole slew of these arguments (I assume you know them - Lucas, Fodor, Searle, Penrose, many others), but simply saying that computer behaviors are determined by programmers just doesn't begin to address the question.
Perhaps in future, we will learn how to make real thinking and intelligent machines, but for now that simply ain’t there yet. Show me the creativity that comes out of “nowhere” in a quasi-infinite ideas space and solves serious problems without brute-force programming as the real wizard behind the scenes, and I will agree that they have now begun to think. But until then, colour me “unpersuaded.” And on this point we are in perfect agreement. Nobody knows how thinking works. But of course the fact that our research in the past fifty years hasn't resulted in HAL yet is a very bad argument against strong AI. It took far longer for us to create heavier-than-air flying machines (and they said it couldn't be done!). KF, you've been so entertaining here I'll add a bit of a personal confession. I wrote my first inference engine in Z80 assembler in 1981. When it deduced that Socrates was indeed mortal, a shiver ran down my spine and I was lost in the same reverie that made Crick write his crappy "The Astonishing Hypothesis". By the time AI hysteria was peaking in '85 or so, I was already aware that my expert system that could pick a bottle of wine to go with dinner was not exactly a major step on the way to machine sentience. Since then my doubts regarding functionalism as implemented by Turing machines have solidified. Nevertheless, I believe my argument still stands: The central claim of ID either hinges fully on the truth of dualism (which is indemonstrable) or it is vacuous.
aiguy
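The 1981 inference-engine anecdote can be illustrated with a toy forward-chaining engine. The original was Z80 assembler; this sketch is a modern Python stand-in, and the rule format is invented for illustration.

```python
# Toy forward-chaining inference engine: repeatedly apply rules of the form
# (antecedent facts -> consequent fact) until no new facts can be derived.

def forward_chain(facts, rules):
    """Return the closure of `facts` under `rules`."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if consequent not in facts and all(a in facts for a in antecedents):
                facts.add(consequent)
                changed = True
    return facts

rules = [(("human(Socrates)",), "mortal(Socrates)")]
derived = forward_chain({"human(Socrates)"}, rules)
print("mortal(Socrates)" in derived)  # -> True: the classic deduction
```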
January 10, 2008
12:42 AM PDT
PS: I don't notice any response from JT on the personal attacks he made and my response in 78. Given the force of what he said in the above, cf. 78 on 48 above, he either owes us all serious documentation on the merits, or else a big-time apology for what are IMHCO plainly inaccurate and slanderous personalities, based on either poor research before spouting off, or else willful slander on the premise that people are unlikely to investigate before repeating a convenient dismissal statement. So, now, over to you, JT. GEM of TKI
kairosfocus
January 9, 2008
09:05 PM PDT
Stephen, BarryA, Jerry and AIG: This thread is actually one of the most illuminating, both directly and indirectly. I'd like to remark on a few points, starting with Jerry's excellent bottom line on Galileo: 1] J, 85: [L]et’s not use Galileo as an example of the oppression of science when the so-called oppressors are champions of Galileo and science in general. Yes, it involved both science and religion but it was primarily a very bad political decision by Galileo that led to his sentence. He betrayed one of his best friends and his spiritual leader. Whether he did so out of arrogance or deliberately, he got what was coming to him. Bingo! My only question is why this sort of coherent, factually balanced summary is not what is commonly and publicly taught, instead of inaccurate, agenda-serving spin. For instance, here is Wiki's introductory remark, having cited very laudatory comments [which, for instance, by silence overlook the significance of the work of Kepler]:
Galileo's championing of Copernicanism was controversial within his lifetime. The geocentric view had been dominant since the time of Aristotle, and the controversy engendered by Galileo's opposition to this view resulted in the Catholic Church's prohibiting the advocacy of heliocentrism as potentially factual, because that theory had no decisive proof and was contrary to the literal meaning of Scripture.[7] Galileo was eventually forced to recant his heliocentrism and spent the last years of his life under house arrest on orders of the Inquisition.
Now, of course, headlines and introductory remarks tend to dominate in perceptions of information, so we must ask seriously why the following details are but little reflected in the headlines, or even in the lead-up to the section on the church controversy:
[Galileo] revived his project of writing a book on the subject [of heliocentrism], encouraged by the election of Cardinal Barberini as Pope Urban VIII in 1623. Barberini was a friend and admirer of Galileo, and had opposed the condemnation of Galileo in 1616. The book, Dialogue Concerning the Two Chief World Systems, was published in 1632, with formal authorization from the Inquisition and papal permission. Pope Urban VIII personally asked Galileo to give arguments for and against heliocentrism in the book, and to be careful not to advocate heliocentrism. He made another request, that his own views on the matter be included in Galileo's book. Only the latter of those requests was fulfilled by Galileo. Whether unknowingly or deliberate, Simplicius, the defender of the Aristotelian Geocentric view in Dialogue Concerning the Two Chief World Systems, was often caught in his own errors and sometimes came across as a fool. This fact made Dialogue Concerning the Two Chief World Systems appear as an advocacy book; an attack on Aristotelian geocentrism and defense of the Copernican theory. To add insult to injury, Galileo put the words of Pope Urban VIII into the mouth of Simplicius. Most historians agree Galileo did not act out of malice and felt blindsided by the reaction to his book. However, the Pope did not take the public ridicule lightly, nor the blatant bias. Galileo had alienated one of his biggest and most powerful supporters, the Pope, and was called to Rome to defend his writings . . .
2] StephenB, 83: It is not natural to doubt one’s own mind or to wonder if there is such a thing as truth. It may be a prevalent or even dominant component of the current cultural zeitgeist, but it is not normal. Daily, I come in contact with folks who try to persuade me that we have no free will, and the irony always escapes them. FREE WILL IS A NECESSARY COMPONENT FOR PERSUADING AND BEING PERSUADED. That the point didn’t occur to them in the first place is evidence of the nature of the problem. That they don’t believe it after hearing it is evidence of the seriousness of the problem. Well said. Sadly, too many will not be persuaded that in order to rise above pre-programmed robots, we have to have real, creative minds of our own. My own thought on the matter, as has been summarised elsewhere, is that as Josiah Royce [and Trueblood] have emphasised, "Error exists" is an undeniable, knowable truth. Thus, truth exists and is knowable, and so we must be able to access it outside the circle of our inner subjective worlds, however provisionally, humbly and fallibly. And, if we are capable of examining and deciding based on evidence rationally and intelligently -- in an intelligible world -- then we must be free to decide above and beyond the various influences under which we act. Mind is self-determined and creative, in short, not just an epiphenomenon of matter in motion and evolution. But, as you say, it is ever so hard for people to trust the direct evidence of their conscious daily experience, never mind that all else we ponder rests on its general reliability. Thomas Reid was ever so right! As Wiki sums up:
Reid believed that common sense (in a special philosophical sense) is, or at least should be, at the foundation of all philosophical inquiry. He disagreed with Hume and George Berkeley, who asserted that humans do not experience matter or mind as either sensations or ideas. Reid claimed that common sense tells us that there is matter and mind . . . . He set down six axioms which he regarded as an essential basis for reasoning, all derived from "sensus communis": * That the thoughts of which I am conscious are thoughts of a being which I call myself, my mind, my person; * That those things did really happen that I distinctly remember; * That we have some degree of power over our actions, and the determination of our will; * That there is life and intelligence in our fellow men with whom we converse; * That there is a certain regard due to human testimony in matters of fact, and even to human authority in matters of opinion; * That, in the phenomena of nature, what is to be, will probably be like what has been in similar circumstances.
The way that it sums up the reactions to this is telling:
In his day and for some years into the 19th century, he was regarded as more important than David Hume. He advocated direct realism, or common sense realism, and argued strongly against the Theory of Ideas advocated by John Locke, René Descartes, and (in varying forms) nearly all Early Modern philosophers who came after them. He had a great admiration for Hume and asked him to correct the first manuscript of his (Reid's) Inquiry . . . . These axioms did not so much answer the testing problems set by David Hume and, earlier, René Descartes, as simply deny them. Contemporary philosopher Roy Sorensen writes "Reid's common sense looks like an impression left by Hume; concave where Hume is convex, convex where Hume is concave. One explanation is that common sense is reactive... Without a provocateur, common sense is faceless." His reputation waned after attacks on the Scottish School of Common Sense by Immanuel Kant and John Stuart Mill, but his was the philosophy taught in the colleges of North America, during the 19th century, and was championed by Victor Cousin, a French philosopher. Justus Buchler showed that Reid was an important influence on the American philosopher C.S. Peirce, who shared Reid's concern to revalue common sense and whose work links Reid to pragmatism. To Peirce, the closest we can get to truth in this world is a consensus of millions that something is so. Common sense is socially constructed truth, open to verification much like scientific method, and constantly evolving as evidence, perception, and practice warrant. Reid's reputation has revived in the wake of the advocacy of common sense as a philosophical method or criterion by G. E. Moore early in the 20th century, and more recently due to the attention given to Reid by contemporary philosophers, in particular those seeking to defend Christianity from philosophical attacks, such as William Alston and Alvin Plantinga
If only we were to look at and fix those little errors at the beginning . . . [BTW, SB, I think that Adler was citing Aquinas on the thousandfold point.] 3] AIG, 86: My computer systems make decisions, and can learn from and be persuaded by input (perhaps from other computer systems). We describe them in intentional, mentalistic terms, and it would be very difficult to describe them in any other way: The system doesn’t remember the answer, so it’s trying to find it but doesn’t want to use the old data, so it might decide to ask… Oh, now it thinks the old data is OK… it’s choosing an appropriate algorithm… Okay, AIG, have you ever had to program at machine code level and then looked at the hard-wired vs microcode versions of CPU architecture? Did you think through the associated register transfer algebra, the electronics of logic gates, and the effects and implications of digital state feedback in the RS flipflop, including the reason for the forbidden states? I think that as you reflect on these, you will agree with me on the following:
a --> In executing a machine code instruction, a typical computer, e.g. a PC, simply pulls in the machine code under clock control, feeds it into an appropriate instruction register, scans it and selects the microcode or hardwired logic to activate, which executes it, doing whatever i/o, storage and transfer or logical operations are stipulated, then sending outputs to appropriate registers. b --> This process is entirely deterministic based on input and stored bit patterns, as it has to be -- absent a breakdown. c --> To get to higher-level programs, what happens is that machine code instructions are chained, especially in sequences. d --> Decision nodes in programs at this level are based on "inspecting" flag conditions, with instructions that branch on meeting or failing to meet the flag bit conditions. I can still rattle off the old 6800 flag register: HINZVC. Again, a deterministic process. e --> When it comes to a higher level yet of functionality, the H-chart plus IPO breakdown shows in cascade, down to the machine code module level, what is again going on: the overall task is summarised, then broken down into first-level stages, typically using initialisation, then IPO based on a known initial starting condition. [I forgot to mention that rule no. 1 of starting up a computer is to get it to a clean initial condition, then permit controlled i/o operations. Otherwise you get into serious trouble. It is also wise to test it from time to time to see that it is still under control, and to put in error-trapping and escape subroutines. While I am at it, I hate interrupt-driven processing, and think that the better way is to use a clean cycle of initialisation and monitoring of key i/o states as the old 68000 Macs did. I shudder to think what is going on inside the new Macs with the abominable segmentation and interrupt scheme that Intel has for some weird reason palmed off on us all. Keep interrupts strictly for emergencies!]
f --> When therefore a computer "thinks" about something, and "decides" etc, what is really going on is that the programmers, collectively, have done so. g --> That is, the success of the PC is based on active information fed in by its designers and programmers, at great expense. [Contrary to popular rumour, Uncle Bill does not use a million monkeys paid in peanuts and banging away at random on keyboards to do his operating systems etc. It only SEEMS that way when you get frustrated with, say, Vista, because the fundamental situation is a kludge on a kludge on a kludge, and the real fresh start sadly failed. AKA OS/2. I also had hopes for the common hardware reference platform, CHIRP; and even for Java. But marketing hype and affordability in the short term won out over technical sense EVERY TIME. H'mm, that goes back to the Dvorak keyboard, doesn't it? How many of us know that QWERTY was designed to slow down typing, to keep keys from getting jammed in an early typewriter? Even the ABCD layout used in some military equipment is faster in principle than QWERTY!]
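The deterministic fetch-decode-execute cycle of steps a through d can be sketched as a toy interpreter with a one-bit slice of a 6800-style flag register. The instruction set below is invented for illustration, not real 6800 machine code.

```python
# Deterministic fetch-decode-execute: pull the instruction at the program
# counter, decode it, execute it, and let a Zero flag steer conditional
# branches -- exactly the flag-inspection decision nodes described above.

def run(program):
    acc, pc, zero_flag = 0, 0, False
    while pc < len(program):
        op, arg = program[pc]
        pc += 1
        if op == "LDA":          # load accumulator
            acc = arg
        elif op == "SUB":        # subtract immediate
            acc -= arg
        elif op == "BEQ":        # branch to arg if the Zero flag is set
            if zero_flag:
                pc = arg
        elif op == "JMP":        # unconditional branch
            pc = arg
        elif op == "HLT":
            break
        zero_flag = (acc == 0)   # update the Z bit (think HINZVC)
    return acc

# Count 3 down to 0: every "decision" in the loop is just a flag test.
program = [("LDA", 3), ("SUB", 1), ("BEQ", 4), ("JMP", 1), ("HLT", 0)]
print(run(program))  # -> 0
```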
By the very sharpest contrast, humans do make intelligent, clever, creative decisions that come out of "nowhere" and transform the world. For, the spiritual world of ideas is at least quasi-infinite but intelligible. So we can visualise, pull together disparate notions into an astonishingly coherent and cogent whole, and then test, debug and get it right, at least enough right to count for now. [I believe in evolutionary spiral forms of software development, and a copy of old Pressman sits on a shelf close at hand even now. Why, the spiral is also my preferred curriculum/edu system archi model and web site model too!] For instance, on the NFL thread, I started to think about the pi = 22/7 approximation, then went to BCD coding, then saw interesting possibilities for parallels to DNA and biofunctionality, including on the limitations of bootstrap hill-climbing of Mt Improbable. [Baron von Munchhausen would appreciate the point!] 4] Perhaps you will object that computers don’t make bona-fide decisions, or have bona-fide beliefs or desires or thoughts. To argue that, we need to do some preparatory work on what makes them bona-fide, keeping in mind Drew McDermott’s famous quip: “Saying Deep Blue doesn’t think about chess is like saying an airplane doesn’t fly because it doesn’t flap its wings”. I think you will see in the summary above why I do not think that computers think in any sense worth talking about. Perhaps in future we will learn how to make real thinking and intelligent machines, but for now that simply ain't there yet. Show me the creativity that comes out of "nowhere" in a quasi-infinite ideas space and solves serious problems without brute-force programming as the real wizard behind the scenes, and I will agree that they have now begun to think. But until then, colour me "unpersuaded." Indeed, here is Prof Wiki [h'mm, anyone willing to write adventure stories a la Munchhausen?], that ever-useful, materialism-inclining witness, on AI:
The modern definition of artificial intelligence (or AI) is "the study and design of intelligent agents" where an intelligent agent is a system that perceives its environment and takes actions which maximizes its chances of success.[1] John McCarthy, who coined the term in 1956,[2] defines it as "the science and engineering of making intelligent machines."[3] . . . . The term artificial intelligence is also used to describe a property of machines or programs: the intelligence that the system demonstrates. Among the traits that researchers hope machines will exhibit are reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.[6] General intelligence (or "strong AI") has not yet been achieved and is a long-term goal of AI research.[7] . . . . Samuel Butler first raised the possibility of "mechanical consciousness" in an article signed with the nom de plume Cellarius and headed "Darwin among the Machines", which appeared in the Christchurch, New Zealand, newspaper The Press on 13 June 1863. [10] Butler envisioned mechanical consciousness emerging by means of Darwinian Evolution, specifically by Natural selection, as a form of natural, not artificial, intelligence.
Some big questions are begged, plainly, in our good old Darwinian context, and the promise -- expert systems and the like notwithstanding [BTW, whatever became of the famous 5th-generation computer project of 20 years ago?] -- is not the achievement. At least, to date. Prof Wiki is enlightening:
The Fifth Generation Computer Systems project (FGCS) was an initiative by Japan's Ministry of International Trade and Industry, begun in 1982, to create a "fifth generation computer" (see history of computing hardware) which was supposed to perform much calculation utilizing massive parallelism. It was to be the end result of a massive government/industry research project in Japan during the 1980s. It aimed to create an "epoch-making computer" with supercomputer-like performance and usable artificial intelligence capabilities . . . . Opinions about its outcome are divided: Either it was a complete disaster, or it was ahead of its time. . . . . the project found that the promises of logic programming were largely illusory, and they ran into the same sorts of limitations that earlier artificial intelligence researchers had, albeit at a different scale. Repeated attempts to make the system work after changing one language feature or another simply moved the point at which the computer suddenly seemed stupid. In fact it can be said that the project "missed the point" as a whole. It was during this time that the computer industry moved from hardware to software as a primary focus.[citation needed] The Fifth Generation project never made a clean separation, feeling that, as it was in the 1970s, hardware and software were inevitably mixed. By any measure the project was an abject failure. At the end of the ten year period they had burned through over 50 billion yen and the program was terminated without having met its goals. The workstations had no appeal in a market where single-CPU systems could outrun them, the software systems never worked, and the entire concept was then made obsolete by the internet. Ironically, many of the approaches envisioned in the Fifth-Generation project, such as logic programming distributed over massive knowledgebases, are re-interpreted in current technologies. 
OWL, the Web Ontology Language employs several layers of logic-based knowledge representation systems, while many flavors of parallel computing proliferate, including Multi-core (computing) at the low-end and Massively parallel processing at the high end. The Fifth-Generation project was aimed at solving a problem that is only now realized by the world at large.
But we do know that it is possible to create intelligent creatures, as we are obviously contingent, thus created -- by whatever creator -- and intelligent. [And as my always linked states, if and when that happens I am fully prepared to recognise such creatures as intelligent -- I would even consider one of them as a friend. R Daneel Olivaw was my all-time favourite sci-fi character!] Thus, so far: Advantage StephenB, with due adjustment for Jerry's powerful point on Galileo. GEM of TKI
kairosfocus
January 9, 2008
08:48 PM PDT
StephenB, The "neutral" in neutral monism means that the stuff of the universe is neither mental nor physical. So no, I haven't accepted the brain and rejected the mind. But let's back up and proceed by small bits - a good idea. First let's see what we agree on; here is my guess about that. We agree that: 1) our mental images (normally, usually) correspond to real things in the world 2) our powers of ratiocination are (normally, usually) reliable with respect to the world 3) we have physical brains 4) we exhibit physical behaviors 5) we have subjective experience of conscious awareness Good so far? Assuming yes, then if we're a little more careful with our terms, we might be able to steer past this and add 6) We agree that we have minds In short, I believe our difference on this last point is that I define "mind" by what it does, and you define it by what you suppose it to be. For the substance dualist, mind is ontologically distinct from what we perceive externally (the physical world) and supports or causes or comprises or is identical with our conscious awareness. This mind-stuff interacts with and directs our physical bodies, learns, solves problems, makes decisions, has ideas, determines (or is) personality, and so on. For the physicalist, "mind" refers to (at a higher level of abstraction) the functioning of the physical nervous system, including initiating movement, learning, solving problems, ... and so on, plus the generation of conscious awareness. And now, what I think: I doubt that neural mechanisms (or their functional equivalents in silicon) can account for our mental abilities, and I have no idea how they can begin to account for consciousness. So, I don't know how thinking works, and I don't know what makes us conscious (why we are not "zombies"). So for me, the word "mind" means whatever can initiate movement, learn, solve problems, ... and, in human beings at least, result in consciousness.
So I'm quite certain we all have minds; I just don't know what they are. And my guess is that our understanding of physics will have to change if we hope to gain an understanding, which is why I am not a physicalist (or a "materialist"). Now, what does that mean for ID? Let's look at ID's core syllogism, simplified: 1) Only minds can create CSI 2) Living things contain CSI 3) So a mind created living things Seen from your viewpoint, this makes good sense. Seen from my viewpoint, it does not: It either constitutes a scientific endorsement of an indemonstrable metaphysical postulate, or it reduces to the vacuous statement "Living things were created by something that can do things like create living things, no matter what that is". We cannot ascertain if the cause of life initiates movement, makes decisions, learns, has ideas, has a personality, and experiences conscious awareness. And the only evidence that it "solves problems" is by declaring that the creation of living things was a problem to be solved. Moreover, we have no way to know if the "problem" was "solved" the way humans figure things out or the way spiders do (presumably unconscious fixed action patterns). I guess that is already too much to count as a "small bit", sorry. I'll stop here.
aiguy
January 9, 2008
08:23 PM PDT
Would you believe that "aiguy wrotes" was an attempt to merge two tenses into one in an attempt to foster a spirit of open-mindedness? I didn't think so. Change that to "aiguy wrote".
StephenB
January 9, 2008
06:40 PM PDT
I only insist on the basic self-evident truths that make rationality possible. There are a few things that we all must agree on or logic goes out the window. The list isn’t very long, and at the top we find this one proposition: [1] We have rational minds, [2] We live in a rational universe, and [3] There is a correspondence between the two. -----aiguy wrotes, "I never called any of these into doubt, and although I have some trouble with your phrasing of [2], I don’t think it precludes our rational discourse." Fair enough. Let’s take it in small bits. You have stated that you are not a dualist. Dualism, of course, posits a material and a non-material component. So, you reject the non-material component and accept only “neutral monism.” Well, that leaves us with the brain, but it rules out the mind. Yet you say that you agree with [1], that we have rational minds. How does that work?
StephenB
January 9, 2008
06:36 PM PDT
StephenB, You make me sound like a dogmatic, pedantic ideologue. I am not. Ok, I'm open to being wrong about that too. I only insist on the basic self-evident truths that make rationality possible. There are a few things that we all must agree on or logic goes out the window. The list isn’t very long, and at the top we find this one proposition: [1] We have rational minds, [2] We live in a rational universe, and [3] There is a correspondence between the two. I never called any of these into doubt, and although I have some trouble with your phrasing of [2], I don't think it precludes our rational discourse. If you don’t believe that the images in the mind reflect the corresponding objects of sense experience outside of the mind, reason loses its value. ...You are not the only one that questions these things. Our entire culture has lost its confidence, and it doesn’t have to be that way. But I fully and completely believe this, i.e. I am some flavor of realist. Why did you assume otherwise? Because I am the wrong flavor? It is not natural to doubt one’s own mind or to wonder if there is such a thing as truth. It may be a prevalent or even dominant component of the current cultural zeitgeist, but it is not normal. Doesn't apply to me, thank goodness. I actually don't know anybody who thinks this. Daily, I come in contact with folks who try to persuade me that we have no free will, and the irony always escapes them. FREE WILL IS A NECESSARY COMPONENT FOR PERSUADING AND BEING PERSUADED. Ah, here we have a bit of a disagreement. Again, nothing that ought to halt our rational discourse, though. First, to show that the irony is not lost on me, here's one of my favorite philosophy jokes: Waiter: What would you like? Diner: I'm a determinist. Why don't we just wait and see? I'm sure you've read the compatibilist literature, but you seem to dismiss it without comment, so let me outline some basics.
Of course we all have minds, and of course we all make decisions, and of course we can be affected (persuaded) by things that others say. The question is, what is going on behind these events?

As usual, a computer analogy makes the point most clearly: My computer systems make decisions, and can learn from and be persuaded by input (perhaps from other computer systems). We describe them in intentional, mentalistic terms, and it would be very difficult to describe them in any other way: The system doesn't remember the answer, so it's trying to find it, but it doesn't want to use the old data, so it might decide to ask... Oh, now it thinks the old data is OK... it's choosing an appropriate algorithm...

Perhaps you will object that computers don't make bona fide decisions, or have bona fide beliefs or desires or thoughts. To argue that, we need to do some preparatory work on what makes them bona fide, keeping in mind Drew McDermott's famous quip: "Saying Deep Blue doesn't think about chess is like saying an airplane doesn't fly because it doesn't flap its wings."aiguy
January 9, 2008
03:10 PM
In the Galileo affair, Urban was the good guy and Galileo was the bad guy, so BarryA's final analogy is not appropriate. The conventional wisdom of the last couple of centuries would support BarryA's argument, but it is not accurate. The following seems to be agreed upon:

Urban was under threat from people who included Galileo's main sponsor, Ferdinando II de' Medici, the Grand Duke of Tuscany. Urban was trying to stop a war and would not support the Hapsburgs. Urban was one of Galileo's better friends and a supporter of his scientific work. Urban suggested that the title of Galileo's book be changed to emphasize the Ptolemy/Copernicus controversy and to play down Galileo's silly argument that the tides demonstrated that the earth was moving. Galileo's title was "Dialogue on the Tides," which at Urban's suggestion became "Dialogue Concerning the Two Chief World Systems." Urban personally asked Galileo to give arguments for and against heliocentrism in the book, and to be careful not to advocate heliocentrism. He made another request: that his own views on the matter be included in Galileo's book. Only the latter of those requests was fulfilled by Galileo, and even then in a derogatory way, put in the mouth of a simpleton.

So Galileo betrayed Urban under the seal of the very man who was helping to depose him. Do we know for sure that Galileo did not know that Urban was under pressure from his sponsor, and that maybe he did this on purpose? What would you think if you were Urban? This is hypothetical, but was a person like Galileo that much out of the loop not to sense the politics of the time? Galileo also took it upon himself to say how the Church should interpret scripture, at a time when the interpretation of scripture was an issue that led to wars.

Tell me how Galileo is a good guy in this scenario and Urban is a bad guy. This whole episode is about politics and has nothing to do with science or religion. For his actions, Galileo was sentenced to that harsh Inquisition torture, the comfy chair (house arrest).

He continued to write while under house arrest and produced what is considered one of his finest works, "Discourses and Mathematical Demonstrations Relating to Two New Sciences." It was not published until much later, in Holland, as his works were banned because of his sentence. So this last is the only unfair thing that happened to Galileo, and it was after his death.

So let's not use Galileo as an example of the oppression of science when the so-called oppressors were champions of Galileo and of science in general. Yes, the episode involved both science and religion, but it was primarily a very bad political decision by Galileo that led to his sentence. He betrayed one of his best friends and his spiritual leader. Whether he did so out of arrogance or deliberately, he got what was coming to him.jerry
January 9, 2008
02:26 PM
-----rockyr writes, "StevenB, I have pondered your reply 62, and I am sure you mean well. Perhaps BarryA's analogy will be effective, to a degree. The problem, as I see it, is that it will be obscuring some very crucial things that need to be clarified, because they are essential to the proper understanding about faith and reason, and about science and philosophy, the very things and errors that are causing the current confusion and problems for the ID."

I totally agree with everything you say here and in the rest of your post. My categories were meant to be thought stimulators for our group only. It was a way of dramatizing the differences rather than the similarities between the two episodes. I also get the point about the possibility that the Galileo/ID analogy will only add to the confusion. This is a close call for me, and I am not that far away from your position. (I assume you are against using the analogy in any context at all.)

Indeed, I reacted to a statement by John Paul II in much the same way you seem to be reacting to my comments. When the Pope apologized for the Galileo affair a few years back, I kept thinking: "They are going to get the wrong impression." Further, your point about Magisterial truth is vitally important. Most of those outside the Church are clueless about what "Papal infallibility" means. It has nothing to do with the Pope's personal moral sensibilities or even with his opinions on most matters. It applies only to a limited set of propositions/definitions about faith and morals. So when JPII apologized, some may have taken that as an indication that infallibility is now off the table. In fact, he even used the phrase, "The Church made a mistake." I almost went through the floor.

Also, there is the myth that the Catholic Church is anti-science. Quite the contrary. In fact, as Rodney Stark has pointed out, it was Catholic monks who got the whole science project off the ground. Anyone who doubts this should read Stark's book, "How the Catholic Church Built Western Civilization." I admit that I am torn over this, as I hope my earlier post indicated.StephenB
January 9, 2008
01:14 PM