Uncommon Descent Serving The Intelligent Design Community

Human Consciousness

(From In the Beginning … ):

For the layman, it is the last step in evolution that is the most difficult to explain. You may be able to convince him that natural selection can explain the appearance of complicated robots, who walk the Earth and write books and build computers, but you will have a harder time convincing him that a mechanical process such as natural selection could cause those robots to become conscious. Human consciousness is in fact the biggest problem of all for Darwinism, but it is hard to say anything “scientific” about consciousness, since we don’t really know what it is, so it is also perhaps the least discussed.

Nevertheless, one way to appreciate the problem it poses for Darwinism or any other mechanical theory of evolution is to ask the question: is it possible that computers will someday experience consciousness? If you believe that a mechanical process such as natural selection could have produced consciousness once, it seems you can’t say it could never happen again, and it might happen faster now, with intelligent designers helping this time. In fact, most Darwinists probably do believe it could and will happen—not because they have a higher opinion of computers than I do: everyone knows that in their most impressive displays of “intelligence,” computers are just doing exactly what they are told to do, nothing more or less. They believe it will happen because they have a lower opinion of humans: they simply dumb down the definition of consciousness, and say that if a computer can pass a “Turing test,” and fool a human at the keyboard in the next room into thinking he is chatting with another human, then the computer has to be considered to be intelligent, or conscious. With the right software, my laptop may already be able to pass a Turing test, and convince me that I am Instant Messaging another human. If I type in “My cat died last week” and the computer responds “I am saddened by the death of your cat,” I’m pretty gullible; that might convince me that I’m talking to another human. But if I look at the software, I might find something like this:

if strcmp(verb, 'died')   % match the verb against a canned trigger word
    fprintf(1, 'I am saddened by the death of your %s\n', noun);
end

I’m pretty sure there is more to human consciousness than this, and even if my laptop answers all my questions intelligently, I will still doubt there is “someone” inside my Intel processor who experiences the same consciousness that I do, and who is really saddened by the death of my cat, though I admit I can’t prove that there isn’t.
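To see how this sort of pattern matching can grow indefinitely without ever becoming understanding, here is a slightly fuller sketch of the same canned-response idea in Python; the trigger words and reply templates are invented for illustration:

# A toy "chatbot" in the spirit of the snippet above: match a keyword,
# fill in a template. Nothing here understands anything.
RESPONSES = {
    "died": "I am saddened by the death of your {noun}.",
    "won": "Congratulations on your {noun}!",
}

def reply(sentence):
    words = sentence.lower().rstrip(".!?").split()
    for i, word in enumerate(words):
        if word in RESPONSES:
            noun = words[i - 1] if i > 0 else "news"  # crude guess: the noun precedes the verb
            return RESPONSES[word].format(noun=noun)
    return "Tell me more."

print(reply("My cat died last week"))  # I am saddened by the death of your cat.

However many triggers and templates the table acquires, the program is still doing exactly what it was told to do, which is the point of the passage above.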

I really don’t know how to argue with people who believe computers could be conscious. About all I can say is: what about typewriters? Typewriters also do exactly what they are told to do, and have produced some magnificent works of literature. Do you believe that typewriters can also be conscious?

And if you don’t believe that intelligent engineers could ever cause machines to attain consciousness, how can you believe that random mutations could accomplish this?

Comments
PS: Newbie onlookers, Cf above for some of what MF -- as usual -- refuses to address on the merits.
kairosfocus
September 17, 2010 at 03:39 AM PDT
Onlookers: if you want a 101 level look at how first principles of right reason are foundational to reasoning, look at the WCTs here. Remember, we have already seen how rejecting the law of non-contradiction allowed some above to try to reject the self-evident truth that 2 + 2 = 4. This was done by arbitrarily injecting a contradiction in the definition of the + operator, so that one thing did not mean one thing in one context. That error of descent into self-contradictory absurdity should serve us as a warning of the penalty to be paid for dismissing first principles of right reason that are self-evidently true. GEM of TKI
kairosfocus
September 17, 2010 at 03:35 AM PDT
StephenB@261
You provide no evidence of having read a word of Adler
Why would any be required beyond the statement that I've done so? The essay was later enlarged into the book "Ten Philosophical Mistakes", which I read (the 1985 edition) before leaving high school. The memory I have of it, though, was discussing the text with a faculty member of the philosophy department a couple of years later. Its reception among people who did philosophy for a living ranged from disappointed to cringeworthy. As I continued to take philosophy classes I started understanding why. As I said, if you don't have any great interest in philosophy, the book might be better than nothing. To the good, it does go over several issues of concern to philosophers. To the bad, philosophers Adler doesn't agree with come across as making dimwitted mistakes that Adler can explain in just a few paragraphs. These aren't mistakes, and Adler does a disservice to the discipline by representing them as such. As you did read the paper I suggested, I glanced at the essay you suggested. Here's his example of a self-evident truth:
One example will suffice to make this clear -- the axiom or self-evident truth that a finite whole is greater than any of its parts. This proposition states our understanding of the relation between a finite whole and its parts. It is not a statement about the word "whole" or the word "part" but rather about our understanding of wholes and parts and their relation. All of the operative terms in the proposition are indefinable. We cannot express our understanding of a whole without reference to our understanding of its parts and our understanding that it is greater than any of its parts. We cannot express our understanding of parts without reference to our understanding of wholes and our understanding that a part is less than the whole of which it is a part.
By the time he gets to the end, I've been talked out of the idea that this concept is self-evident and instead believe that it follows from the definition of the word "whole". Is that a mistake? Perhaps, or perhaps it's just sloppy writing. To make the error more clear: I describe an object. Is it a whole? How do you know? Well, if its parts were summed and did not result in the whole, then we probably wouldn't call it a whole in the first place. Does the set of integers form a whole? Does the empty set? The computer I'm using is certainly a whole, and certainly has a number of parts, yet I'd argue that the whole is certainly far greater than the sum of those parts. So that's Adler. (I'm now wondering why you haven't read him. The "sum of parts" example was quite early on, and far stronger than what you've come up with on your own. I'm not encouraging you to read him, mind --- I just find it curious that there's no, as you say, "evidence" that you've read him, and some evidence against.)
BarryR
September 16, 2010 at 10:12 PM PDT
CH@269 Continuing...
All reasoning does rely on self evident truths, fundamentals, what we call first principles. You get rid of these, and you won’t even recognize whether anything else can or cannot “reason.” It would be like taking out your eyes to look at them—it’s impossible.
Well, I'm certainly willing to be convinced, but all the examples that have been given to me so far (with the exception of the cogito) don't appear to be both self-evident and universal. But you're making an even stronger claim than "self-evident truth exists". You're claiming that every reasoning system, no matter how obscure, must use these truths. Yet when I look at lambda calculus, I see nothing self-evident there. So let's say I have an unfortunate brain injury that causes me to be unable to understand the cogito. In fact, this brain injury prevents me from understanding any self-evident truth (although I can perceive them if I can derive them otherwise). Someone gives me a copy of Stansifer (or I'm pretty sure they did --- I could be dreaming) and as I'm working through chapter 7, I use only the material in that chapter, what I've derived, and a few self-evident "highly-likelies" (not truths) to reason that lambda calculus can be used to derive the integers. I don't have absolute certainty that my derivation is correct, but I've checked it twice and have a high degree of confidence. Why is the above impossible? Please be specific.
BarryR
September 16, 2010 at 09:12 PM PDT
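For concreteness, the derivation described in the comment above -- the integers out of lambda calculus -- can be imitated in a few lines of Python, with Church numerals encoded as bare functions. This is an illustrative sketch, not a quotation from Stansifer:

# Church numerals: the number n is the function that applies f to x
# exactly n times. Only function abstraction and application are used
# until the result is decoded at the end.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def decode(n):
    # Count the applications by taking f to be "add one", starting from 0.
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
print(decode(two))            # 2
print(decode(add(two)(two)))  # 4

Whether the reduction rules themselves rest on self-evident truths is, of course, the very point in dispute in this thread; the sketch only shows the derivation going through.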
CH@269 In reverse order:
So yes, if you’re enumerating self-evident truths, this is probably the best place to start. It’s also probably the best place to stop.
Why?
Because beyond that point Descartes' argument fell apart, and while a lot of ink has been spilled on the question (including by the redoubtable Dr. Adler), nobody has been able to improve on it. This is exactly the problem Lewis is trying to solve in De Futilitate. He telegraphs it with the phrase "We are always prevented from accepting total skepticism because it can be formulated only by making a tacit exception in favor of the thought we are thinking...". This is trivially false, and to me it means that he either never read Descartes or (more likely) didn't read deeply. The counterexample that refutes Lewis is "I am skeptical of everything, including the correctness of my own skepticism." Descartes' formulation was much stronger: "Cogito ergo sum". To move from this point to some certain knowledge of the world, Descartes is forced to postulate a very specific God. While not compelling, it does have the virtue of clarity. Lewis is far less clear and even less convincing:
1. All knowledge depends on inference.
2. If inference is false, we can know nothing.
3. We know something (his version of the cogito).
4. Thus inference is true.
To see how this falls apart, let's substitute "evolution" for "inference".
1. All knowledge depends on evolution.
2. If evolution is false, we can know nothing.
3. We know something.
4. Thus evolution is true.
You might be motivated to point out that starting with "all knowledge depends on evolution" assumes evolution is true. Likewise, starting with "all knowledge depends on inference" is a statement that can only be reached if inference is true. Once you've assumed inference is true, it's pretty trivial to prove inference is true. There's also the small problem that inference has been proved for this single case, not as a general proposition. I conclude this argument was never presented to an undergraduate class in the philosophy department. They would have shredded it. (Anyone know if Lewis ever read Descartes?) You can use evolutionary epistemology to do an end-run around this problem if you're willing to give up binary certainty in exchange for a continuum of certainty, but I'll save that for another time. The Stanford Encyclopedia of Philosophy has a dense article on Descartes' epistemology as well as a much more readable article on evolutionary epistemology.
BarryR
September 16, 2010 at 08:57 PM PDT
A few notes on citations, courtesy of Burkhard at talk.origins:
If you can stomach Foucauldian language, you could try Paul Veyne, Did the Greeks believe in their myths?: an essay on the constitutive imagination (1988), not so much on its own but as a source for further references. He has amongst other things a quote from Etienne Pasquier (16th century historian) who sent a manuscript to friends for comments, and was rebuked for having all these irritating references in it. Veyne traces the tradition of referencing to the legal profession, but not, as one might think, to the emergence of copyright; rather, to that of a salaried academic community which started to criticise each other - to avoid litigation for defamation, they put references in "controversial works" to cover their backsides. You might also find something in Robert Merton's book "On the Shoulders of Giants"; he links it more to the emergence of copyright and the idea of the moral rights of the author. ***** Your correspondent seems to be more interested in the present practice than its historical origins. In that case, it is the Berne Convention you are looking for, and especially the section on "moral rights", which also includes the right to be acknowledged as an author. Common law jurisdictions were historically averse to this notion, which comes from continental European (especially German) Romanticism and the idea of the "Genius artist" who invests his heart and soul in his work (compare and contrast Basil and Dorian in the first chapter of "The Picture of Dorian Gray" as representatives of the two camps). By now though, everyone is signed up to it. I vaguely remember a paper given by Stina Teilmann on this (It's A Wise Text That Knows Its Own Father); it's on the net somewhere but probably never published in a peer reviewed journal. For that, you could try Jane C. Ginsburg, The Concept of Authorship in Comparative Copyright Law, 52 DEPAUL L. REV. 1063, 1091 (2003) ***** > Berne Convention? That sounds more like copyright law. I always > thought of citations as a way of providing a service to the reader > rather than to provide copyright protection to the prior art. > > -loki Well, historically that may well have been the case - see my other two references elsethread. But Garamond's correspondent wanted a reference why everything "has to be" cited - and the Berne Convention does indeed establish a right of an author to be cited (as you say, as an aspect of copyright). Since these days promotions etc. often depend on your citation index, authors are more likely than previously to invoke convention rights.
These and other comments may be found in the thread "Citation on citation".
BarryR
September 16, 2010 at 07:39 PM PDT
BarryR, I don't know why I bother responding to your posts, I find them awful, in the new sense of the word. You change the subject often or intentionally evade the real crux of the subject. All reasoning does rely on self evident truths, fundamentals, what we call first principles. You get rid of these, and you won't even recognize whether anything else can or cannot "reason." It would be like taking out your eyes to look at them---it's impossible. Your thinking is very familiar and bizarre to me, one that needs the treatment of Lewis and Chesterton, which is why you had to defer to your talk.origins buddies to find an argument that wasn't an ad hominem against the men themselves.
So yes, if you’re enumerating self-evident truths, this is probably the best place to start. It’s also probably the best place to stop.
Why?
Clive Hayden
September 16, 2010 at 05:22 PM PDT
#265 Stephenb The misunderstandings seem only to get deeper. I wasn't trying to recount any history (I did get a comment number wrong which may be the cause of the confusion). All I intended to say was - look at comments #260 and #261 - you may find them more substantial than #254 (and more substantial than simply declaring something to be self-evident or rational).
markf
September 16, 2010 at 02:37 PM PDT
---markf: "I think there is a misunderstanding I was referring to my comments in response to Gpuccio i.e. #261 and #262. The only names mentioned in those comments are Wittgenstein and yours!" I was responding to your comments @254, at which time you listed five authors [later two more] and implied, wrongly, that I didn't know much about their philosophy, ending with this snippy comment: "you only have to go as far as Wikipedia to find that out." To that remark, I responded with a call for less name dropping and more substance. It was @262 that you made this comment: "I hope you will find my comments in response to Gpuccio more substantial." So, your recounting of the history is a bit off the mark. ---"Yours – ever trying to be friendly – Mark" I am always ready to establish and maintain a friendly relationship. I need the practice. [Insert smiley face].
StephenB
September 16, 2010 at 01:00 PM PDT
#263 I think there is a misunderstanding. I was referring to my comments in response to Gpuccio i.e. #261 and #262. The only names mentioned in those comments are Wittgenstein and yours! Yours - ever trying to be friendly - Mark
markf
September 16, 2010 at 11:01 AM PDT
---mark: "(They do at least go beyond asserting that my theory is correct because it is self-evident or because all rational men believe it.)" No, they don't even go that far. They simply appeal to big names and ask us to believe their doctrines on the grounds that they are famous and, one gathers, smart. Since each of these smart men posited contradictory notions of truth, and given the fact that you identify with all of them, contradictions and all ["I am happy to be in this company"], I can't imagine why you would think that your comments were substantive.
StephenB
September 16, 2010 at 09:41 AM PDT
#257 Stephenb I am sorry you found my comment lacking in substance. I hope you will find my comments in response to Gpuccio more substantial. (They do at least go beyond asserting that my theory is correct because it is self-evident or because all rational men believe it.)
markf
September 16, 2010 at 09:03 AM PDT
Gpuccio #225 cont. Broadly I agree with what you wrote on truth. I prefer to talk about statements rather than judgements - but it comes to much the same thing. Some statements simply describe some state of affairs in the world and are true if that state of affairs obtains. As you point out, other types of statement are true or false for different reasons. Mathematical statements are true if they follow from a set of explicit or implicit premisses. Logical statements are true if they correctly describe a type of valid inference. As I said in #254 I guess I could be described as a pluralist - although I hate "...isms" and "...ists". A really important insight comes from Wittgenstein. Often a statement will be true for a combination of different types of criteria and because they always or often coincide we never have to resolve which apply. For example, an arithmetic statement such as 2+2=4 is mathematically true because it follows from the Peano axioms (I think those are the right ones). Now the Peano axioms apply very accurately to a vast range of situations so it is also true that 2+2=4 for a vast range of situations. Unless we are pure mathematicians we are never called upon to decide whether 2+2=4 is true in the mathematical or the descriptive sense. And when challenged as to what makes it true we get confused and end up making up some abstract world of which it is true! A really powerful example of this is ethical statements, which are true in virtue of specific facts but also of our attitude to those facts - but that is a whole new can of worms. I am sorry, I seem to have practically written a paper here - but these issues are so much subtler than people realise.
markf
September 16, 2010 at 08:55 AM PDT
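As a sketch of what "follows from the Peano axioms" amounts to operationally, here is the arithmetic markf describes reduced to code; the encoding below is illustrative, not anything markf committed to:

# Peano-style arithmetic: numbers are built from Z (zero) by S
# (successor), and + is defined by the two equations
#   m + Z    = m
#   m + S(n) = S(m + n)
# Unfolding those equations is all it takes to check that 2 + 2 = 4.
Z = ()

def S(n):
    return ("S", n)

def add(m, n):
    return m if n == Z else S(add(m, n[1]))

two = S(S(Z))
four = S(S(S(S(Z))))
print(add(two, two) == four)  # True

In markf's terms, this is the mathematical sense in which 2+2=4 is true; the descriptive sense is the separate observation that the axioms fit a vast range of situations.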
#225 Gpuccio I think you were wise to stay out of this debate. However, you are as ever polite and interested in different views so let me do my best to respond. These are quite difficult to explain so please expect false starts and hopefully minor inaccuracies in what I say. I will split this into two comments to try and prevent them becoming too long. 1) Can logic be reduced to some overriding principle and if not how do we tell what is logical? I think not. Or rather it depends on what you mean by "reduced to". I see logical principles as descriptive not prescriptive. I don't think we have to justify valid reasoning by referring to some higher generality. This approach leads to an infinite regress because for any rule we can always ask the proposer of that rule to justify it. You have to stop somewhere - so why not stop at the first level - with the actual statement you are making. Let me try to be concrete with an example. I think we all recognise that it is valid to argue something on the lines of: (A) My father was a pilot. All pilots had an eye test. Therefore, he had an eye test. But if someone was to challenge this line of reasoning it would add little to say it is an example of the general rule: (B) All As are Bs. x is an A. Therefore x is a B. It is not the fact that statement A matches rule B that makes it valid. Rather it is the other way round. I would test the validity of the general rule by seeing if it works for specific examples. We might then later on use the rule to remind us that another situation is similar to the first case. But essentially we don't need criteria or proof of the validity of a statement like A. How do we tell it is valid? By imagining the situation, by drawing Venn diagrams, by failing to produce counterexamples.
markf
September 16, 2010 at 08:36 AM PDT
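markf's last test above -- failing to produce counterexamples -- can even be mechanized for a rule like (B). A sketch, with a small four-element domain chosen arbitrarily for illustration:

# Exhaustively search for a counterexample to rule (B): an assignment
# of predicates A and B over a small domain in which "All As are Bs"
# and "x is an A" hold while "x is a B" fails.
from itertools import product

DOMAIN = range(4)

def counterexamples():
    assignments = list(product([False, True], repeat=len(DOMAIN)))
    for A_bits in assignments:
        for B_bits in assignments:
            A = {x for x, b in zip(DOMAIN, A_bits) if b}
            B = {x for x, b in zip(DOMAIN, B_bits) if b}
            for x in DOMAIN:
                if A <= B and x in A and x not in B:
                    yield (A, B, x)

print(list(counterexamples()))  # [] -- the search turns up nothing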
---BarryR: "Oh, I have read Adler. If you're not very curious about philosophy, he may be better than nothing." You provide no evidence of having read a word of Adler, much less his article that I sent you to a few days ago. If you had read that small portion of Adler and comprehended it, you would likely have approached the subject matter the same way that I approached the article that you asked me to read from Edmonds. You would have, as I did, presented the argument you believe to be false, explained why it is false, and offered a better alternative. As it is, you are just going through another round of name dropping [this time with Stansifer].
StephenB
September 16, 2010 at 07:04 AM PDT
F/N: NWE -- as opposed to Wikipedia -- has a very helpful 101 on truth, for those trying to follow basic ideas and catch a hint of nuances. G
kairosfocus
September 16, 2010 at 06:46 AM PDT
---mark: "The irrational folks include Spinoza, Leibniz, G.W.F. Hegel, William James and Dewey. (You only have to go as far as Wikipedia to find that out.)" Yes, so what? With respect to truth, Hegel was totally irrational; Dewey was eminently irrational; James was monumentally irrational. Do you have anything of substance to say? ---"I am happy to be in this company. To be clear – I would say that a statement is sometimes true because it corresponds to some facts about the world – but other statements are not like this – a pluralist theory." Yes, I am sure that you are happy to be in their company, and I agree that you belong there, holding each of their contradictory notions of truth at the same time. I suspect, however, that you have read little or nothing of these men that you identify with. If so, perhaps you would like to argue on behalf of one of these irrational positions. Here is a clue: Popularity among the elitist academicians is not a measure of rationality.
StephenB
September 16, 2010 at 06:35 AM PDT
GP, and Onlookers: A very long time ago, Aristotle, in Metaphysics 1011B, observed:
since the contradiction of a statement cannot be true at the same time of the same thing, it is obvious that contraries cannot apply at the same time to the same thing . . . . Nor indeed can there be any intermediate between contrary statements, but of one thing we must either assert or deny one thing, whatever it may be. This will be plain if we first define truth and falsehood. To say that what is is not, or that what is not is, is false; but to say that what is is, and what is not is not, is true . . .
Or, in a more morally tinged vein, we may cite the Sermon on the Mount:
Matt 5:37 Simply let your 'Yes' be 'Yes,' and your 'No,' 'No'; anything beyond this comes from the evil one.
And, the prince of the prophets had something to add to this:
Isa 5:18 Woe to those who draw sin along with cords of deceit, and wickedness as with cart ropes . . . . 20 Woe to those who call evil good and good evil, who put darkness for light and light for darkness, who put bitter for sweet and sweet for bitter. 21 Woe to those who are wise in their own eyes and clever in their own sight.
In short, there is something both profound and morally significant in seeing that the truth will say that what is, is, and it will say that what is not, is not; refusing to put light for darkness and bitter for sweet. For, one's yes ought to be yes, and one's no ought to be no. That is why we recognise the crimes of perjury, slander and libel. Now, there are of course any number of philosophers and theologians who have proposed alternative statements [most often emphasising clarity and coherence . . . which bespeaks the unity of truth]. But what is interesting is that in so asserting their own statements and arguing for them, they consistently either imply or are dependent on this underlying very simple principle: true claims say of what is that it is, and of what is not, that it is not. And, that holds even when the philosophers in question dispute the claim that truth is about correspondence of thought or statement or proposition with reality. Merely listing that certain philosophers have said something diverse from the above is not enough to overturn it. For instance, if we were to take MF's claim above and read it as saying that he agreed with Aristotle, he would for good reason think that we have uttered falsehood: we have failed to say of what is that it is, and have instead said that what is not, is. In short, if we were to ignore and dismiss the principle that truth says of what is that it is, and of what is not that it is not, and that were to become the general rule, verbal communication itself would collapse in a morass of confusion and deception, destroying society. In short, such fails the test of the Categorical Imperative. Indeed, what goes beyond letting the yes be yes and the no be no, cometh of evil. And so, to the extent that certain philosophers in recent centuries have indeed denied that truth says of what is, that it is, and of what is not, that it is not, we must therefore not flinch from saying something that is sadly true: to that extent, such have failed at the bar of the duty of truth and reason. GEM of TKI
kairosfocus
September 16, 2010 at 04:06 AM PDT
mark: I have rather backed out of this debate about consciousness, because frankly I am not much interested in philosophical subtleties. But just to understand, would you agree on the statement that all logical reasoning is based on some innate rules of the mind? And that, even if you reduce those rules to the minimum (maybe it is the principle of non contradiction, or something even more basic, I leave that to logicians), still some fundamental rule of reasoning, universally shared, must be the basis for all logical reasoning? If it were not so, how could we ever distinguish (in a shared way) logical reasoning from illogical reasoning? Another point: would you agree that "true" and "false", at least in the most common sense, are properties of judgements? So, even if we don't want to state that "truth is the correspondence of the mind to reality", which could be a bit metaphysical, would you agree that if a mental judgement (such as "this table is round") corresponds to reality (the table is really round, and not square), then we can say that such a judgement is true? That would be an empirical meaning of truth, while there is also a logical meaning: in a logical system, I suppose that some proposition is true if it can be logically derived from the original axioms and assumed procedures. Or not? Then there is always the metaphysical meaning of truth, like in big questions such as "what is truth?". But to that question, even very important persons have not given an explicit answer, and remained in silence. I would appreciate your thoughts on those points.
gpuccio
September 16, 2010 at 03:00 AM PDT
markf@255 Nice cite! I had no idea that was there.
BarryR
September 16, 2010 at 01:31 AM PDT
Stephenb: "As all rational people know, truth is the correspondence of the mind to reality ... It’s true if it corresponds to reality. It’s the only rational definition that has ever been offered." The irrational folks include Spinoza, Leibniz, G.W.F. Hegel, William James and Dewey. (You only have to go as far as Wikipedia to find that out) I would also add Kant and Wittgenstein - but that is more debatable. I am happy to be in this company. To be clear - I would say that a statement is sometimes true because it corresponds to some facts about the world - but other statements are not like this - a pluralist theory.
markf
September 16, 2010 at 01:15 AM PDT
MF: As soon as you make a distinction of meaning by asserting a proposition to describe a true state of affairs, and then proceed to infer from it a conclusion, you are -- usually implicitly [that's where the trick is, onlookers] -- using the self-evident first principles of right reason, e.g. the law of non-contradiction. For instance, above, you claim:
the Stansifer example is Barry’s refutation of the statement: “All reasoning begins with the acknowledgement of self-evident truths” The Stansifer book includes a lot of reasoning. It does not begin with self-evident truths. QED.
In so asserting claimed facts, you are asserting that certain things are so, as opposed to not being so. That a certain book "does not begin with self-evident truths" is an asserted state of affairs about the world, i.e. that certain books exist, that they are by a certain author, and that they argue in a particular way that does not begin with self evident truths. Each of these is as opposed to the denial of same, and to preserve meaning you implicitly assume that if a book exists it does not in the same sense not exist. If it is written by a given author, it is not simultaneously true that it is not written by the said author. If it does not begin with SET's, it is not also just as true that it does begin with SET's. All of this is a manifestation of the implicit appeal to the law of non contradiction, and leads directly to the conclusion that on the contrary the books DO begin with SET's, starting with LNC. Just, it is easy to overlook an implicit appeal. That's the trick and trap. GEM of TKI
kairosfocus
September 16, 2010 at 01:14 AM PDT
CH@250
Wow. Just wow. Do you mean Stansifer, the person who wrote on programming languages, is your answer to StephenB that there are not self evident truths?
Not unless I really screwed up the formatting. Let's check: Nope. StephenB said this:
All reasoning begins with the acknowledgement of self-evident truths.
That's trivially false. I explained why to him. I guess you didn't read that far.
Premise: All foo are bar.
Observation: There exists one foo that is not bar.
Conclusion: The premise is false.
So far so good?
Premise: All reasoning begins with the acknowledgement of self-evident truths.
Observation: There exist at least three reasoning systems that do not rely on self-evident truth.
Conclusion: The premise is false.
In junior high school, when you said something like "All the Red Sox suck", there should have been a geeky guy with braces and glasses pouncing on you, pointing out that one particular player on the Red Sox was having a career year and thus could not be properly said to suck, and despite the near-universal suckage of the rest of the team, the statement "All the Red Sox suck" was, strictly speaking, incorrect. This teaches you the value of words like "some", "most", "several", and "the systems of reasoning of which I am aware".
Are you a human being?
Depends on who and when you ask. Being a Caucasian landowning male in this culture, I haven't had to worry about the effects of my ancestors being considered three-fifths of a human being. However, in Western China I'm treated very much as a monkey who talks (just not very well). So no, being human is not self-evident.
Are you alive?
If I was undead, how would I know? (Is that why the scary organ music keeps following me around?)
Do you live in the universe?
You just don't put much thought into these exchanges, do you? Which universe are we speaking of? The geocentric universe of the ancient Hebrews and early Christians? The heliocentric universe of the Greeks and late Christians? The early scientific cosmologies that did not extend past the Milky Way? The universe pre- or post-Hubble? Pre- or post-Hubble telescope? I'd say the universe isn't self-evident at all.
Do you think?
Descartes figured out "Cogito ergo sum", and despite several standard criticisms (you can look them up yourself, they're in Wikipedia) he's justly famous for this. He's much less famous for what came after, but I'll save that for the Lewis essay. So yes, if you're enumerating self-evident truths, this is probably the best place to start. It's also probably the best place to stop. Had I argued that no self-evident truths exist, this would be sufficient to prove me wrong. Since I've read Descartes, I knew enough not to make that argument (although evidently I still don't know how to communicate my actual argument to you).
Your thinking is so bizarre to me
Your thinking is quite familiar to me. I used to think exactly the same way. Education is a terrible, awful thing (in the archaic senses of those words).
you apparently think the world and all of its glorious multiplicity is reducible to programming computers.
You use "apparently" in a sense I'm not familiar with. You're having so much fun sneering at points I'm not making that I hesitate to ask you to engage with what I've actually written. Eh, you're having fun. I won't insist.
BarryR
September 16, 2010 at 01:08 AM PDT
Onlookers: In Bk I Ch 2 of his famous The Rhetoric (4th C BC, the first major work on the subject), Aristotle aptly summed up how arguments work:
Of the modes of persuasion furnished by the spoken word there are three kinds. The first kind depends on the personal character of the speaker [ethos]; the second on putting the audience into a certain frame of mind [pathos]; the third on the proof, or apparent proof, provided by the words of the speech itself [logos]. Persuasion is achieved by the speaker's personal character when the speech is so spoken as to make us think him credible . . . Secondly, persuasion may come through the hearers, when the speech stirs their emotions. Our judgements when we are pleased and friendly are not the same as when we are pained and hostile . . . Thirdly, persuasion is effected through the speech itself when we have proved a truth or an apparent truth by means of the persuasive arguments suitable to the case in question . . . .
The point is that emotions (the most persuasive appeal) may rest on an accurate judgement, but their mere intensity says nothing about the degree of warrant. Secondly, no authority -- and appeal to peer review is appeal to the collective authority of the guild of scholars at a given place and time -- is better than his or her facts, assumptions and reasoning. So, in the end the only appeal that actually successfully warrants claims is the presentation of the material, credible facts, in light of reasonable assumptions [including first principles of right reason], and reasoning. This last may be deductive, inductive or abductive, and may appeal to different degrees of certainty: undeniable and/or self evident, demonstrative, probabilistic, moral certainty. In the above, what we have been seeing is a persistent evasive refusal to engage the actual issues on the merits [e.g. the original issue on consciousness, the arguments by CSL and GKC, even to some extent the implications of 2 + 2 = 4], by playing at the game of "cite me some recent authorities." (This, in the teeth of pointing to some serious sources, both classical and modern.) We are now, sadly, plainly looking at a case of selective hyperskepticism. Just remember, the same folks raising these arguments have taken 2 + 2 = 4, and by injecting an arbitrary and contradictory redefinition of the + operator, have tried to justify the notion that axiomatic foundations and associated definitions and concepts are arbitrary, and so 2 + 2 = 4 is not a strictly true claim. You CAN set up axiomatic systems as you please, but as my 6th form math teacher pointed out long ago, if you inject contradictions, the math spits back at you: things fall apart in incoherence. Something that Wikipedia fails to underscore the significance of in the intro-summary for its article on Axioms:
In traditional logic, an axiom or postulate is a proposition that is not proved or demonstrated but considered to be either self-evident, or subject to necessary decision. Therefore, its truth is taken for granted, and serves as a starting point for deducing and inferring other (theory dependent) truths. In mathematics, the term axiom is used in two related but distinguishable senses: "logical axioms" and "non-logical axioms". In both senses, an axiom is any mathematical statement that serves as a starting point from which other statements are logically derived. Unlike theorems, axioms (unless redundant) cannot be derived by principles of deduction, nor are they demonstrable by mathematical proofs, simply because they are starting points; there is nothing else from which they logically follow (otherwise they would be classified as theorems). Logical axioms are usually statements that are taken to be universally true (e.g., A and B implies A), while non-logical axioms (e.g., a + b = b + a) are actually defining properties for the domain of a specific mathematical theory (such as arithmetic). When used in the latter sense, "axiom," "postulate", and "assumption" may be used interchangeably. In general, a non-logical axiom is not a self-evident truth, but rather a formal logical expression used in deduction to build a mathematical theory. To axiomatize a system of knowledge is to show that its claims can be derived from a small, well-understood set of sentences (the axioms). There are typically multiple ways to axiomatize a given mathematical domain . . .
Immediately, we see that this 101 view of conventional, materialism-dominated thinking is forced to accept the point that logic is a universal basis for reasoning. In going on to try to make axioms out to be in effect arbitrary definitions, it fails to reckon with the point that mere coherence is a powerful constraint on sets of possible axioms, and that fields of work are therefore constrained by the need for coherence and a related acknowledgement of observable mathematical facts, which may indeed be self-evident [e.g. the matchbook case on 2 + 2 = 4]. If axioms do not point to facts we observe in mathematics, we get suspicious. For good reason: proof by contradiction via reductio ad absurdum is about reducing a suspect assertion to falsity by showing up an internal contradiction and/or a contradiction to a known mathematical fact. But, in an era that discounts and dismisses the significance of self-evident principles of right reason, we see ever more and more unacknowledged reduction to absurdity. Wiki gets it right this time:
Reductio ad absurdum (Latin: "reduction to the absurd") is a form of argument in which a proposition is disproven by following its implications logically to an absurd consequence.[1] A common species of reductio ad absurdum is proof by contradiction (also called indirect proof) where a proposition is proven true by proving that it is impossible for it to be false. For example, if A is false, then B is also false; but B is true, therefore A cannot be false and therefore A is true. This of course only works if it is not a false dichotomy. Meaning that, if there are other choices than true or false (like undefined or something entirely different), then this form of proof is a logical fallacy.
(And, onlookers, observe how these issues are being studiously ignored. That is not a coincidence.) GEM of TKI
kairosfocus
September 16, 2010 at 01:04 AM PDT
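For concreteness, the classic worked instance of the reductio form just quoted is the proof that the square root of 2 is irrational: suppose sqrt(2) = p/q with p and q integers sharing no common factor. Then p^2 = 2q^2, so p is even; write p = 2k, and it follows that q^2 = 2k^2, so q is even as well, contradicting the choice of p and q. The assumption is thereby reduced to absurdity, and sqrt(2) is irrational.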
#250 Clive I think you will find that the Stansifer example is Barry's refutation of the statement: "All reasoning begins with the acknowledgement of self-evident truths" The Stansifer book includes a lot of reasoning. It does not begin with self-evident truths. QED. This is different from the statement "There are self-evident truths". I am also confused by the statements that you listed as being self-evident: Are you a human being? Are you alive? Do you live in the universe? Do you think? These are all obviously true - but self-evidently true? The examples that Stephenb has offered of self-evident truths have been general statements such as "every event has a cause". Yours, by contrast, are statements about an individual who may or may not be alive. I would have thought that they are far from self-evident. It requires observation or experience to confirm their truth.
markf
September 16, 2010 at 12:59 AM PDT
BarryR,
I have next to me Stansifer’s _The Study of Programming Languages_. I’m familiar with the chapters on Lambda calculus, denotational semantics and Hoare’s axiomatic approaches. All of these are methods of reasoning, none of these begin with an enumeration of self-evident truths. Thus, your statement is false.
Wow. Just wow. Do you mean Stansifer, the person who wrote on programming languages, is your answer to StephenB that there are not self evident truths? Are you a human being? Are you alive? Do you live in the universe? Do you think? All of these are self evident truths. All the way down to you using a keyboard and typing a sentence. Your thinking is so bizarre to me, you apparently think the world and all of its glorious multiplicity is reducible to programming computers. I would surely hope that a book on programming won't mention that there is no use in living any longer; if it did, I'm afraid you'd take that to heart too.
Clive Hayden
September 15, 2010 at 09:51 PM PDT
BarryR, That quote reminds me of the Keats line from Ode on a Grecian Urn: "Beauty is truth, truth beauty, that is all ye know on earth and all ye need to know"
zeroseven
September 15, 2010 at 09:11 PM PDT
CH@246 Bother, I've dropped another closing blockquote above.... sorry.
My point is that you pick and choose which philosophical aspects of StephenB that you’d like to see cited in peer review,
Certainly. Some of his mistakes are worth me correcting personally, some are worth me digging up a citation, and some should best be corrected by Stephen realizing he can't possibly find any support for that claim. Yes, I've run into people who start off conversations by demanding multiple citations. The best way to handle that is to provide them. When they see that tactic isn't working, they stop asking. In my case, I actually go off and read the citation I'm given, so just a handful is enough to slow me down.
It’s your opinion as to what qualifies as needing a citation in a discussion like this.
Certainly. But it's an opinion shared by many educated people. If Stephen is here to express an opinion, he's free to disregard mine. If he's here to change my mind (and I'm here to change his, or at least show him how to argue his point more effectively), then he can take my opinion and learn from it how I can be convinced. Preview seems to think this is ok, here goes....
BarryR
September 15, 2010 at 07:53 PM PDT
StephanB@235
All reasoning begins with the acknowledgement of self-evident truths.
Refutation of universals is trivial. Or, in more words, you have a bad habit of using words like "all" when staking a claim. This tells me you're not thinking about what you're saying. I have next to me Stansifer's _The Study of Programming Languages_. I'm familiar with the chapters on Lambda calculus, denotational semantics and Hoare's axiomatic approaches. All of these are methods of reasoning, none of these begin with an enumeration of self-evident truths. Thus, your statement is false. (You'll notice this bit of reasoning also failed to begin with self-evident truths, but there's no need to pile on.)
If you don’t believe that, try beginning with something else.
Hmmm.... like this?
In the lambda calculus are a collection of terms built out of variables, function symbols, and other lambda expressions. We suppose we have a countably infinite set of variables ... and an arbitrary set of function symbols... The only additional symbols used in the syntax of lambda expressions are the period, the Greek letter lambda, and the parentheses.
It works for me...
With respect to the problem that some say, “it’s not self-evident to me,” there is a simple answer to that objection. Such individuals are simply afraid or unwilling to acknowledge that which they do, indeed, know.
There's an even simpler answer: you're wrong. Suggestions on how we can go about disambiguating those two solutions?
It is possible to improve one’s understanding of basic logic without abandoning its principles.
That's true and actually profound; I'm also worried that it's a typo. In hopes that it isn't, yes, I agree that in order to understand logic we must be able to step outside of it. Logic isn't terribly useful in intellectual work --- it's far too clumsy. The ideas and most of the refutations come from intuition. Logic is useful for writing the results up afterwards. This is especially true in mathematics. Back when I was getting my Masters, artificial intelligence was all the rage and I took several grad classes in it. There was some work being done on automated theorem proving: give the computer the initial axioms and a theorem, and see if the computer could swizzle around the axioms in such a fashion to either prove or disprove the theorem. What computer scientists learned from this exercise (and mathematicians already knew) was that this kind of brute-force approach only works with the most trivial problems. The program didn't know how to step outside of the purely logical approach, and so was ultimately ineffective. (The proofs that they did generate were unreadable by anything not a computer, and I think that particular field has been abandoned.)
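For flavor, here is a toy version of the brute-force approach described above, reduced to forward chaining over propositional atoms; real theorem provers of that era were resolution-based and far more elaborate, and the axioms and rules below are invented:

# Keep applying rules to the set of known facts until the goal
# appears or nothing new can be derived.
def prove(axioms, rules, goal):
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return goal in known

rules = [(("p",), "q"), (("q", "r"), "s")]
print(prove({"p", "r"}, rules, "s"))  # True: p gives q, then q and r give s
print(prove({"p"}, rules, "s"))       # False: r is never derivable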
Once one “discards” the law of non-contradiction, he/she loses his capacity to reason in the abstract.
Hmmm... no, I don't think so. Quantum mechanics would be the standard counterexample, but I should be able to come up with something less esoteric if you can give me a formal definition of what you mean by non-contradiction.
Modern mathematics is founded on laws, as I have pointed out several times.
You have indeed pointed out several context-dependent laws, and I keep asking you things like "what does it mean to raise a graph to an exponent?" and "what does it mean to take the sine of a graph?". So no, you still haven't postulated any universal (context-free) laws. If you did (and postulating is the easy part) you'd still have to demonstrate their universality without recourse to any universal axioms. That's a really tough bootstrapping problem, and the fact that you still can't tell context-sensitive laws from universal laws tells me that solving that problem is well out of your reach. (If you reread the above carefully, you'll notice that I'm reasoning about universal laws without using universal laws. Might be a hint in there....)
All of Euclid’s elements are known to be self evident
What's self-evident about a set of coordinates describing a volumeless space? I don't find this self-evident at all. If you like, you can call me names, but I'm still not going to see this as self-evident, nor will you be able to prove it's self-evident. So as a personal belief, you're welcome to it. You're even free to believe that your personal beliefs are universal. You're not free to believe that you can construct an argument that would make them so.
Wikipedia alternates between rationality and irrationality depending on the subject matter.
I can recommend a textbook if you like.
You are assuming that since disputes about truth exist, that truth itself cannot exist.
Your inability to accurately summarize what I'm saying does get tiresome. If I made a claim that universal truth didn't exist, I would have exactly the same problem that you do in claiming it does exist. Instead, I'm claiming that we have no way of accessing universal truth, and even if we did, there's no way it can be shared. Note that this is not claiming it's impossible --- I think that's true, but that's a personal belief. For the tools we have available to us now, though, we can't do it, and that's no great tragedy.
I like your sense of humor. Socrates, Plato, Aristotle, Aquinas, Adler, Maritain, Gilson, Copleston, Kreeft ... I could fill the entire page.
One will do. Socrates and Plato aren't going to be helpful. Strike Aristotle as well. Aquinas is famous for concluding that logic could not be used to show the existence of God, thus leaving room for faith. So pick one of the others and give me chapter and verse (which is to say that I'll be looking it up, so please provide enough information to do so). As a reminder, you're looking for support of this statement:
As all rational people know, truth is the correspondence of the mind to reality
The reason you're not going to find anything is because philosophers and theologians learn in their freshman year (if not before) when not to use the word "all", and to avoid the implied ad hom in "rational people". But you're welcome to try....
—“It’s a really terrible definition: how do you go about ascertaining if something is true?
It’s true if it corresponds to reality.
Congratulations! All your prejudices are now true. One of the first lessons learned in higher education is that humans suck at perceiving reality. We're tolerably good at basic questions of eating and reproduction. Beyond that, we have common sense, famously defined by Einstein as the collection of prejudices we possess at age 18. Education teaches you to be very, very wary of what you think reality is (especially the self-evident bits). It's almost always wrong (as you're demonstrating at length here).
Unfortunately, he is steeped in Kantian subjectivism and doesn’t understand logic. Why would you want to follow him when there are so many better thinkers around? Begin with Mortimer Adler.
Oh, I have read Adler. If you're not very curious about philosophy, he may be better than nothing. If you are curious, you find you outgrow him in pretty short order.
BarryR
September 15, 2010 at 07:43 PM PDT
BarryR,
Randolph Smith dedicates an entire chapter to "Documenting your scholarship" in his _Guide to Publishing in Psychology Journals_. He cites Bruner (1942) as follows: a sin one degree more heinous than an incomplete reference is an inaccurate reference; the former will be caught by the editor or printer, whereas the latter will stand in print as an annoyance to future investigators and a monument to the writer's carelessness.
I mean anything at all being discussed, not research attempting to be published. My point is that you pick and choose which philosophical aspects of StephenB you'd like to see cited in peer review, but this selection of yours is arbitrary, and you don't have a peer reviewed citation yourself that guides you in how you make these arbitrary requirements of him needing to provide a citation. It's really just another evasion, a red herring, something to pull out of the ol' box of tricks as an argument maneuver when you do not or cannot take the argument head-on.
I’ll get a citation and learn something without losing face, or everyone else reading the exchange will note the lack of citation and conclude that the author was expressing a personal opinion.
So what if he was? It's your opinion as to what qualifies as needing a citation in a discussion like this. I don't see a citation that speaks for your opinion either. It's an evasion tactic.
I’ve forwarded your question along to the folks at talk.origins. I’m interested to see if they can unearth if science got the practice from the legal profession of if it work the other way ’round.
I can't wait. :)
Clive Hayden
September 15, 2010 at 07:12 PM PDT